U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence


Session at a glance: summary, key points, and speakers overview

Summary

The panel, convened by the White House OSTP and featuring senior officials from the U.S. government and leaders from Anthropic, Google DeepMind, OpenAI and XAI, focused on how open standards and protocols can make AI agents interoperable and secure [1-3][5-11][15-22]. Sihao Huang noted that billions of dollars are being invested in AI infrastructure and that competing firms are racing to make models cheaper and more powerful, underscoring the need for common interfaces [13-15]. He introduced the emerging ecosystem of agent protocols, including Anthropic’s Model Context Protocol (MCP), DeepMind’s A2A, OpenAI’s Agentic Commerce Protocol and XAI’s Macrohard project, as the basis for the discussion [17-21][23-24].


Michael Sellitto explained that MCP is a universal open standard that lets models discover and use enterprise or government data sources and tools through simple descriptions, eliminating bespoke integrations [28-34][36-38]. He added that the companion Skills protocol lets developers encode repeatable task instructions that can be transferred across vendors, further enhancing data portability and competition [46-48]. Owen Lauder described DeepMind’s agent-to-agent standard as a digitized “clipboard” sharing identity, capabilities and security requirements, and its Universal Commerce Protocol (UCP) as a way for agents to interact with websites and payment systems [63-71][74-76]. Michael Brown highlighted that shared commerce protocols enable agents from different companies to coordinate tasks such as booking travel, illustrating how common standards can democratize AI-driven services worldwide [94-102].


Austin Marin announced the new Agent Standards Initiative, housed within NIST’s voluntary-consensus framework, and a request for information on AI-agent security that closes in March [130-138][155-162]. He outlined upcoming sector-specific listening sessions on education, healthcare and finance to identify challenges such as handling personally identifiable information and to develop metrology, benchmarks and best-practice documents [165-172]. The initiative also builds on existing drafts for AI-agent identity and authorization, aiming to create interoperable security layers analogous to the historic development of SSL and HTTPS for e-commerce [163-168][206-207]. Sihao Huang linked this effort to the open-internet legacy, arguing that decentralized protocols like TCP/IP and HTTPS spurred global prosperity and that similar open AI standards are essential for worldwide adoption and secure commerce [186-198][199-202].


Participants used analogies from the automobile and electrical industries to stress that standardized metrics and safety certifications can give users confidence in AI agents, while open standards preserve sovereignty and allow switching between providers [211-230][232-250]. The discussion concluded that government can facilitate cross-industry dialogue, but industry must lead the technical work, and international collaborations such as the INAEMS network are already shaping measurement and evaluation consensus for AI agents [252-254][276-281].


Key points


Major discussion points


Emergence of AI agent protocols to enable interoperability and competition – The panel highlighted several open standards such as the Anthropic Model Context Protocol (MCP), Google DeepMind’s agent-to-agent protocol, OpenAI’s commerce protocol, and XAI’s Macrohard project, all aimed at letting agents “talk” to data sources, each other, and commerce systems [17-21][28-38][63-76][98-101].


U.S. government’s coordinating role in standards development – OSTP and the newly rebranded Center for AI Standards and Innovation (within the Department of Commerce and NIST) act as the “front door” for industry, avoiding duplicated agency requests, issuing requests for information on agent security, and convening sector-specific listening sessions [132-146][155-164][165-172].


Security, trust, and evaluation as prerequisites for adoption – Speakers stressed that without robust security, identity, and authorization standards, builders cannot safely grant agents access to sensitive data or real-world actions; analogies to SSL/HTTPS and automotive safety metrics were used to illustrate the need for measurable, trustworthy standards [206-207][211-230][158-162][163-164].


International collaboration and a global “AI Internet” vision – The discussion repeatedly referenced builders in India, Kenya, and other regions, noting that open protocols should let any developer plug into AI services worldwide; the U.S. engages with ten-country networks and shares best-practice measurements to foster a truly global ecosystem [186-190][52-56][276-280][108-115].


Learning from historical standards (Internet, electrical, automotive) to shape AI standards – Examples such as the 802.11 Wi-Fi standard, NIST’s taillight-color standard, early HTTPS adoption, and automotive safety ratings were invoked to argue that open, consensus-based standards drive widespread, secure adoption [124-126][147-152][233-238][211-218].


Overall purpose / goal of the discussion


The panel was convened to explain and promote a coordinated effort, led by both industry leaders and U.S. government agencies, to create open, interoperable, and secure AI agent standards. By establishing common protocols, testing frameworks, and security guidelines, the participants aim to lower barriers for global developers, accelerate innovation, and ensure that AI systems can be safely integrated into commerce, public services, and everyday applications.


Tone of the conversation


The tone remained constructive and collaborative throughout. It began with a formal introduction and factual overview, moved into enthusiastic descriptions of technical progress, incorporated light-hearted remarks (e.g., Michael Brown’s “red means stop” analogy), and shifted into reflective, historical analogies that underscored the importance of standards. No adversarial moments appeared; the dialogue stayed optimistic about the potential of open standards to “grow the pie” for all stakeholders.


Speakers

Sihao Huang – Senior Policy Advisor for AI, Emerging Tech, White House [S1][S2]


Austin Marin – Acting Director, U.S. Center for AI Standards and Innovation, Department of Commerce [S4]


Wifredo Fernandez – Director for Global Government Affairs, XAI [S5][S6]


Owen Lauder – Senior Director and Head of Frontier Policy and Public Affairs, Google DeepMind [S7][S8]


Michael Sellitto – Head of Global Affairs, Anthropic [S9]


Michael Brown – Head of Growth and Operations, OpenAI [S10][S11][S12]


Additional speakers:


Michael Kratsios – Director, Office of Science and Technology Policy (OSTP) (mentioned as OSTP director)


Craig Burkhart – Acting Director, National Institute of Standards and Technology (NIST) (mentioned as Acting Director of NIST)


Howard Lutnick – U.S. Secretary of Commerce (Commerce Secretary)


George Osborne – Colleague of Michael Brown, name on placard (referenced in discussion)




Full session report: comprehensive analysis and detailed insights

Opening & Context – Sihao Huang, Senior Policy Advisor for AI at the White House OSTP, opened the session, noting that U.S. firms are investing roughly $700 billion in AI infrastructure this year and are competing fiercely to deliver cheaper, more powerful models, making common interfaces urgent [13-15]. He introduced the panel: Austin Marin, Acting Director of the Center for AI Standards and Innovation at the Department of Commerce, and senior representatives from Anthropic, Google DeepMind, OpenAI and XAI [3-12].


Company-Specific Protocol Overviews


Anthropic – Model Context Protocol (MCP) & Skills – Michael Sellitto described MCP as a universal open standard under which a data source and its tools are described to the model in plain language, enabling automatic discovery and retrieval of enterprise information such as payroll or revenue [28-38]. He contrasted this with the prior landscape of bespoke, vendor-locked integrations and highlighted the companion Skills protocol, which encodes repeatable task instructions that can be taught once and transferred across models, enhancing data portability and reducing lock-in [39-48].
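The discovery mechanism described above, in which a data source advertises plain-language descriptions of its tools and the model chooses among them, can be sketched in a few lines of Python. This is an illustrative toy rather than the official MCP SDK: the server manifest, tool names, and the keyword-matching stand-in for the model are all hypothetical.

```python
# Illustrative sketch (not the official MCP SDK): an MCP-style server
# advertises its tools with plain-language descriptions, and the model
# picks one by reading those descriptions instead of relying on
# bespoke glue code written for each vendor.

# A hypothetical server manifest: each tool is just a name, a
# description, and an input schema.
PAYROLL_SERVER = {
    "name": "hr-payroll",
    "tools": [
        {
            "name": "get_payroll_record",
            "description": "Fetch payroll data for an employee by ID.",
            "input_schema": {"employee_id": "string"},
        },
        {
            "name": "get_revenue_report",
            "description": "Fetch quarterly revenue figures.",
            "input_schema": {"quarter": "string"},
        },
    ],
}

def discover_tool(server: dict, task: str) -> str:
    """Toy stand-in for what the model does: score each tool by how
    much its description overlaps the task, and return the best fit."""
    task_words = set(task.lower().split())
    best, best_score = "", 0
    for tool in server["tools"]:
        score = len(task_words & set(tool["description"].lower().split()))
        if score > best_score:
            best, best_score = tool["name"], score
    return best

print(discover_tool(PAYROLL_SERVER, "fetch payroll data for employee 42"))
# -> get_payroll_record
```

The point of the sketch is that adding a new data source means publishing another manifest, not writing custom integration code for each model vendor.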


Google DeepMind – Agent-to-Agent (A2A) & Universal Commerce Protocol (UCP) – Owen Lauder explained A2A as a “digitized clipboard” that conveys an agent’s identity, capabilities, intent, data requirements and security constraints to another agent, removing the need for custom code [63-73]. He also outlined UCP, which standardizes how agents interact with websites and payment systems, with pilot partners ranging from Walmart and Target in the United States to Flipkart and Infosys in India [74-77].
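The “digitized clipboard” can be illustrated with a minimal sketch. The field names below are hypothetical, not the published A2A schema; the point is that a receiving agent inspects a declarative card instead of needing custom integration code for every counterpart.

```python
# Illustrative sketch of the A2A "digitized clipboard" idea: before
# collaborating, an agent hands its counterpart a small card stating
# who it is, what it can do, and what security it requires.
# Field names here are hypothetical, not the published A2A schema.
AGENT_CARD = {
    "id": "travel-planner-01",
    "capabilities": ["flight_search", "hotel_booking"],
    "intent": "book a round trip to Goa",
    "data_formats": ["application/json"],
    "security": {"auth": "oauth2", "encryption": "tls1.3"},
}

def accepts(card: dict, required_capability: str, required_auth: str) -> bool:
    """A receiving agent checks the card rather than running bespoke
    code: does the counterpart offer the needed capability and use a
    compatible authentication scheme?"""
    return (
        required_capability in card["capabilities"]
        and card["security"]["auth"] == required_auth
    )

print(accepts(AGENT_CARD, "hotel_booking", "oauth2"))  # -> True
print(accepts(AGENT_CARD, "payment", "oauth2"))        # -> False
```

Because the card is declarative, two agents from different vendors can negotiate a handshake without sharing a code base or a walled garden.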


OpenAI – Commerce Protocol – Michael Brown noted that OpenAI’s commerce protocol enables agents to plan a family vacation, book flights and hotels, demonstrating how shared commerce standards allow agents from different companies to cooperate on real-world tasks [94-102].
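Why a shared commerce message shape matters can be shown with a small sketch: if every agent emits the same order structure, a merchant needs only one integration. The types and field names below are hypothetical and are not OpenAI's published protocol.

```python
# Illustrative sketch (field names hypothetical, not OpenAI's published
# commerce protocol): the value of a shared commerce standard is that
# any agent can emit the same order message and any merchant endpoint
# can validate and price it with a single integration.
from dataclasses import dataclass, field

@dataclass
class OrderItem:
    sku: str
    quantity: int
    unit_price_cents: int

@dataclass
class Order:
    buyer_agent: str   # which agent is purchasing on the user's behalf
    merchant: str
    items: list = field(default_factory=list)

    def total_cents(self) -> int:
        # The merchant computes the total the same way regardless of
        # which vendor's agent sent the order.
        return sum(i.quantity * i.unit_price_cents for i in self.items)

# Agents from different vendors produce the same message shape, so a
# family-vacation booking can span flights and hotels in one order.
trip = Order(
    buyer_agent="assistant-a",
    merchant="example-travel",
    items=[
        OrderItem("GOA-FLIGHT", 2, 45000),
        OrderItem("GOA-HOTEL-3N", 1, 30000),
    ],
)
print(trip.total_cents())  # -> 120000
```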


XAI – Macrohard & “parallel Internet” – Wifredo Fernandez positioned XAI’s Macrohard project as part of a “parallel Internet” that will sit alongside the existing web, accelerating AI development while raising regulatory questions such as governance of agent-driven social-media platforms [119-123].


Historical Analogies & Security Emphasis – Sihao likened the need for secure AI-agent interfaces to the historic development of SSL and HTTPS, which unlocked e-commerce on the open web [206-207]. Sellitto reinforced the security argument with an automobile-industry analogy, suggesting that standardized crash-test-style safety metrics would give users confidence in AI agents [212-218].


U.S. Government Initiatives – Austin Marin clarified the Center’s “front-door” role: it coordinates agency requests, avoids duplication, and ensures companies engage with advisers who understand frontier and agentic AI [138-152]. The Center follows NIST’s long-standing voluntary-consensus approach, exemplified by the historic taillight-color standard that defined the exact shade of red for vehicle lights [146-152]. Marin announced a Request for Information on AI-agent security (deadline in March) and referenced a draft NIST document on agent identity and authorization that is open for comment [155-165]. He also outlined upcoming sector-specific listening sessions (education, healthcare, finance) in April to surface challenges such as handling personally identifiable information, with the aim of producing metrology, benchmarks and best-practice guidance [165-172].


International Collaboration – The discussion highlighted the International Network for Advanced AI Measurement, Evaluation and Science (INAEMS), a ten-country consortium that meets regularly to share best practices and develop consensus on measurement methodologies [276-280]. Sihao stressed that standards should enable builders in India, Kenya and elsewhere to use and switch between U.S. AI products without lock-in [186-190].


Next Steps (as stated in the transcript)


1. Submit comments to the AI-agent security RFI before the March deadline [155-165].


2. Review and comment on the NIST draft on agent identity and authorization [163-165].


3. Participate in the April sector-specific listening sessions [166-172].


4. Continue engagement with the Center for AI Standards and Innovation and the broader INAEMS network to shape forthcoming voluntary standards [276-280].


Closing Observation – Participants expressed broad agreement that open, interoperable AI-agent standards, covering data access (MCP), task encoding (Skills), inter-agent communication (A2A) and commerce (UCP), are essential to prevent vendor lock-in, foster global innovation and create a “parallel Internet” for AI [28-48][63-77][94-102][119-123]. They invoked historical precedents such as TCP/IP, HTTPS, electrical-plug standards and automotive safety metrics to argue that voluntary, consensus-based standards can drive secure, widespread adoption while avoiding fragmentation [198-201][233-242][248-251].


Session transcript: complete transcript of the session
Sihao Huang

of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable and open to the rest of the world to sort of build on that for your own businesses, for your own benefits. And so we have an amazing panel here today. We have, so first of all, I’m Sihao Huang. I’m Senior Policy Advisor for AI and Emerging Tech at the White House. We’re joined with Austin Marin, who’s the Director for the Center for AI Standards and Innovation at the Department of Commerce, which really is the center for a lot of AI activity within the U.S. government, setting standards, driving innovation, measuring AI systems, improving metrology, and a lot of the smartest people in the U.S.

government are within Austin’s organization. And then we have the four frontier AI companies from the United States. So we’re very happy to be joined by Mike Sellitto, who is the Head of Global Affairs at Anthropic. We have Owen Lauder at Google DeepMind, who’s the Senior Director and Head of Frontier Policy and Public Affairs. We have Mike Brown, who is head of growth and operations for OpenAI for Countries. And, of course, we have Wifredo Fernandez, who is the director for global government affairs at XAI. So really the amazing lineup of U.S. industry. I said this in a previous panel, but American companies are spending $700 billion on infrastructure this year, just this year alone. And they probably won’t like it that I say this, but they’re competing very hard against each other to make AI models cheaper and more powerful for you guys to build on and to drive those applications.

And so this is going to be a panel on how we make that happen, how we standardize interfaces with those AI systems. And so first I’m just going to ask a question to the AI companies that are sat here. So over the past few months, I think, we’ve seen the emergence of an ecosystem of standards to support the deployment of AI agents. I think one of the most notable ones is Anthropic’s Model Context Protocol, which a lot of other companies are building off of right now and is sort of becoming the industry standard. Of course, you have Google DeepMind’s A2A Agent-to-Agent Protocol, OpenAI’s Agentic Commerce Protocol, and then XAI, of course, has been working on its highly secretive and famous Macrohard agent project.

And so all the companies here are very much involved in sort of this agent discussion. And so maybe open it up to the companies here to tell us a little bit about what these agent protocols actually do and what they have unlocked for the builders who are sat here, the audience. What do they enable a software engineer or an AI engineer in India or other countries to create?

Michael Sellitto

Okay. Well, first I want to start off by thanking Sihao and OSTP for organizing this panel and all the people who are here. Thank you. So it’s great to be here with Austin. I think Anthropic has really had a really strong partnership with the Trump administration and appreciated the leadership of Secretary Lutnick in expanding and enhancing the Center for AI Standards and Innovation, which is really critical to making this technology work for everybody in a manner that’s safe, responsible, and open. MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use. So imagine the knowledge bases inside of an enterprise. You can imagine government data sources.

The Indian government, of course, is a real leader in, why am I forgetting the acronym right now, DPI, sorry, and just has massive amounts of data that are already digitized. And so MCP is a way that you can connect your AI models and agents to those data sets and also tools. And it really is, you know, a simple, intuitive way. You just need to give the model a rough description of what’s in the data source and what kind of tools or how can it access it. And then the model will intuitively know how it can use those data sources the same way that somebody in your enterprise or your organization would know if I want to get payroll data, I need to go to this human resources system.

If I want to get data about, you know, our revenue, I need to go into HEX or whatever your particular tools are. You know, before MCP, you really had to build all these systems in a very bespoke manner, which meant that if you built them with one model or one vendor, you were kind of stuck because you’d have to rewrite everything if you wanted to switch. MCP being this open source protocol that’s supported by all of the major AI companies means that you really have this degree of interoperability, which just enables the whole system to be much more open and competitive. We also recently built Skills.

It’s a set of instructions that teach agents how to perform specific tasks. The way that I describe this or think about it is, you know, imagine a new person joins your team. You spend a little bit of time teaching them, you know, how to do work the way that your organization does it. And then you expect them to just be able to follow those instructions all the time. So you kind of teach once and then they’re able to do that. It’s the same thing with skills, which also is another open protocol where you can build these skills. And then if you decide that, you know, you want to switch from Anthropic to any of the other fine companies here on the panel, you can move those skills over.

And so that interoperability and data portability is really a critical piece of making this an open and competitive environment.
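The “teach once, reuse anywhere” idea described above can be sketched as a portable instruction bundle. The structure below is hypothetical rather than Anthropic's published format; the point is that any vendor's runtime can replay the same skill file.

```python
# Illustrative sketch of a portable "skill": a skill is just a bundle
# of instructions, so the same file can be handed to a different
# vendor's agent. The structure is hypothetical, not a published spec.
EXPENSE_REPORT_SKILL = {
    "name": "file-expense-report",
    "instructions": [
        "Collect receipts from the shared finance folder.",
        "Total amounts by category (travel, meals, software).",
        "Submit the report through the HR portal before month end.",
    ],
}

def run_skill(skill: dict, execute_step) -> list:
    """Any agent runtime that understands this shape can replay the
    skill: it simply walks the instruction list with its own step
    executor, which is what makes the skill portable."""
    return [execute_step(step) for step in skill["instructions"]]

# Two hypothetical vendors' runtimes reuse the identical skill file.
log_a = run_skill(EXPENSE_REPORT_SKILL, lambda s: f"[vendor-a] {s}")
log_b = run_skill(EXPENSE_REPORT_SKILL, lambda s: f"[vendor-b] {s}")
print(len(log_a), len(log_b))  # -> 3 3
```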

Owen Lauder

Amazing. Thank you, Mike. And, yeah, thank you to Sihao. Thank you to OSTP and the U.S. government for the event and all the partnership. And a big thank you and congrats to our Indian hosts on a fantastic summit week. If you take a step back, it has been, I think, a really exciting week, a demonstration of how advanced AI is now being used around the world to do incredible things. It’s been really exciting. I think seeing the way that people are using Gemini right across India, really exciting to see the way that everyone in India from world-class scientists using AlphaFold to teachers and students using AI in the classroom. And I think with all of the progress that we’ve seen in the last few years, it’s easy to forget sometimes that this is still relatively new technology.

We’re still in the relatively early innings of working out how to develop this technology and use it for good. And one of the things that we need to do, I think Sihao covered this very well in his opening gambit, is build out this ecosystem of technical standards to make sure that we can continue using this technology in the right ways. There’s a couple of ways that we’re thinking about these standards. One is technical standards, interoperable standards, and then also standards for testing these systems, making sure that we can use them in a reliable and secure way. We really want to contribute right across the piece here, so we’re excited. We have various standards that we have contributed to the ecosystem.

Our agent-to-agent standard that Sihao mentioned. This is basically a standard for how agentic systems talk to each other. At the moment, it’s a little bit tricky for agents to converse with each other. You have to often write bits of bespoke code for an agent to talk to an agent, or they have to be running on the same walled garden code base. So what we do with agent-to-agent is essentially have a sort of digitized clipboard of information that an agent will share with another agent. What’s my ID as an agent? What are my capabilities? What am I trying to do? How do I take data? What are my security requirements? This is going to be absolutely fundamental to sort of greasing the wheels of the agentic economy.

UCP, another standard that we’re working on, so we have our universal commerce protocol at Google. This essentially does the same thing, but it’s for how agents talk to websites and payment systems. This is going to be transformative for business. It’s great to be able to partner with companies right around the world, whether it’s Walmart and Target in the U.S. or Flipkart and Infosys in India that we’re working with across these agents. Excited to see what everyone is going to do with the technology that we can enable with this.

Michael Brown

Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He got tied up in another panel, so I’m here. George and I work extremely closely together, but he has a much nicer accent because he’s from the U.K. I’m doing my best here. You’re doing very well, I might say, very well. For me, this is a fun panel because it feels like a very collaborative and cooperative opportunity to grow the pie, and the companies that are on either of our side are extraordinary companies with extraordinary humans, and it’s fun to just work with them in some of these areas. If I were going to kind of explain why we’re here in this particular panel to my kids, who are 9 to 11, I would sort of say, look, are there countries out there in the world where when you get to a stoplight, red means go?

I don’t think so. I think mostly red means stop and green means go. I mean, if I’m wrong, I apologize. I’m not an expert. But, you know, having sort of shared understanding in countries, rich and poor, advanced and still developing around how things work, I think grows the pie because it allows builders to build in a way that everyone can kind of know that what they’re building is going to be both secure and is going to be accessible and hopefully enjoyable or useful to people anywhere in the world. And I think each of the companies up here is contributing something great to that. You know, I’ve joined OpenAI relatively recently, but like MCP to me is something like I just knew it’s like that’s really important.

And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, right? And I think that’s terrific that it’s the thing. You know, Owen also mentioned in commerce, I don’t know if these standards compete or if it’s cooperative, but at OpenAI, we have a commerce protocol as well for the same thing, because there’s a world where these agents are going to be out shopping for us, which is kind of fun, right? So, you know, if the agent knows that you’re planning on taking a family vacation and it knows that you want to visit Goa and the agent can go actually secure your travel flights and your hotel, these commerce protocols can do that.

So agents of different companies, potentially in different countries, can all partner and work well together because they understand how they’re supposed to be looking for shared information and how that information should be shared. There’s kind of a shared understanding there. And so I think all of us are working to build these protocols to grow the pie, to create more democratization, more commerce, more benefit for everyone by having these common protocols in place.

Wifredo Fernandez

Thank you, Sihao. Great to be with you all here, and thank you to the government for having us. What an exciting week, frenetic and kinetic and chaotic, as I was saying earlier. So it’s just an honor to be here and to feel the energy and all the innovation and to meet a bunch of different builders across India. So Wifredo Fernandez, folks call me Wifi for short. It’s a nickname I got in the 90s before wireless Internet was a thing, so my name became relevant later. But, yeah, this is certainly a topic that brings us all together, which is wonderful. You know, XAI is only two and a half years old. So we’re all in this together.

So the foundational work done by these peer companies has enabled us to accelerate our development. We’re better because of those, and we’re better because we can all build on top of those. And these standards and protocols that folks have built and that we sort of lay out and sort of agree to as an industry and as governments really make sure that not just us four compete, right? This enables a ton of innovation. So, you know, on the X side, and, you know, XAI and X sort of operate in tandem, it’s been really neat to see the AI community sort of build and test and discuss and debate in public.

So, like, when Moltbook was taking off, I think you likely found out about it on X. And so it’s just neat to see the ecosystem sort of converge in that discussion space. And just in thinking about this panel and thinking about Moltbook in particular, it’s like, well, do we regulate social media platforms that are agent driven? It just brings all these really novel questions about how we regulate. But I think at the end of the day, we all agree that these open standards that are creating sort of this, call it a layer, call it a new ecosystem, call it a parallel Internet, are just really crucial for our development of the Internet writ large.

And so, yeah, excited about the panel and the discussion here today.

Sihao Huang

Thank you so much. Your name is formalized in the 802.11 protocol, which is what allows my phone to connect to the Internet in D.C. and here in India. So it’s extremely relevant. I’m going to use that. That’s awesome. So I think we’ve heard a little bit from our companies who are engaging in a lot of dynamic activity, pushing out agent protocols of all kinds. And I think there’s a lot of industry excitement over agents right now. One of the big announcements that we’re here to make, which Director Kratsios also made earlier on the main stage, is the Agent Standards Initiative, and that is something that is led out of CAISI in NIST. So I’ll turn to Austin to introduce that.

Austin Marin

Absolutely, and thanks, Sihao, and thank you to OSTP for convening this event and to my fellow panelists. I’ll start with a brief introduction of my organization. So I am the Acting Director for the U.S. Center for AI Standards and Innovation. Our background: we were founded about two years ago as the U.S. AI Safety Institute. In June of last year, Commerce Secretary Howard Lutnick refounded us as the Center for AI Standards and Innovation, which signaled a shift from sort of safety concepts to standards and innovation. And our remit is to be the front door to industry for working with the U.S. government. There are, I think, two aspects of our organization that bear note. The first is that we’re located within the Department of Commerce.

We are commerce-focused. We are industry-focused. We work with all of the companies on this panel. Some of them we have formal research or pre-deployment evaluation agreements with so that we can work with them on their models and the research questions they’re tackling. We also do take seriously our role trying to serve as a front door to the U.S. government for industry. We want to make sure that when industry is trying to navigate government that they’re speaking to the right people, that the people in government they’re speaking to have advisors who understand frontier AI and agentic AI, and also that the industry isn’t being overwhelmed by duplicative requests from different aspects of government.

You don’t want 10 different agencies asking the same company basically the same thing and creating unnecessary work, and so we try to act in sort of a coordinating role to make sure that industry is being heard and they’re navigating the U.S. government. The other aspect of our organization that bears note is we’re located within NIST, the National Institute of Standards and Technology, and NIST has an over-century-long track record of not regulating but helping industry, through consensus, develop voluntary standards and best practices. Acting Director of NIST, Craig Burkhart, he likes to talk about taillights, brake lights on the back of a car. I’m sure you all see them in India. It’s the same color red as it is in the U.S.

That’s because it was a NIST standard of exactly what color red is going to be on the taillights. But another important aspect of that anecdote is it wasn’t government that said this is the color red that you all must use. It was industry came together, and with the help of NIST experts through a convening, they agreed on what the color should be. And so now when we look at what the future brings and where NIST can bring its industry-driven, consensus-based voluntary standards work into the new AI world, we’re looking to AI agent standards. So as Sihao said, we announced this week an AI agent standards initiative, which is looking at all facets of AI and AI agents.

There’s a couple aspects of it that have already been announced that we’re working on, and I’ll tick through those relatively quickly. The first is we have a request for information out in the field. It closes in March and we encourage you to engage with us and provide comments on AI agent security. AI agents obviously bring a whole host of new security challenges and we’d love to hear from you and your organizations about what challenges you are facing. Learning and identifying those challenges is a first step. Once we identify those challenges we can then take the next step of seeing where can NIST’s approach of voluntary standards and best practices documents, how can they help address and mitigate those challenges.

Another aspect, our colleagues at NIST, the Information Technology Laboratory or ITL, they have a draft out for comment on AI agent identity and authorization. Again, encourage you to engage and interact with them. A third initiative that we recently announced is we’re going to hold sector-specific listening sessions hopefully in April in the sectors of education, healthcare, and finance where we’re going to convene various members of industry and say to them, look, there’s this great technology out there called AI, have you heard of it, AI agents, why aren’t you adopting it? What challenges are you facing? And we may not be able to solve those challenges, but maybe we can. And so one example I give, and I don’t know that it’s going to be something we find out, but for instance, in the education and healthcare sector, there’s business concerns and existing regulatory concerns about PII, personally identifiable information.

And perhaps what we’ll learn through these listening sessions is that hospitals or schools aren’t deploying AI because they can’t reliably evaluate how AI agents are handling the PII. And so that’s something that CAISI, my organization, could develop metrology, benchmarks, evaluations, best practices, documents that could give confidence to those types of institutions that the agents are performing as desired. And maybe that’s a step that we could take through voluntary consensus driven best practices and standards that unlocks adoption. So we’re very focused on that. We’re looking forward to learning what those challenges are. I don’t know if the challenge I mentioned is actually a challenge facing industry. And that’s part of NIST’s approach: in D.C., we only see a small slice of what’s going on in industry.

We only have a tiny window into the world, so this comes from a place of humility: we don’t know all the challenges people are facing. The companies on this panel are doing an incredible job coming up with protocols for some of the challenges they face. We talked about agent-to-agent for how agents communicate, about MCP for how agents navigate databases, and about UCP and OpenAI’s commerce protocol for engaging in e-commerce. I’m sure that through these conversations we’re going to identify other areas where open-source protocols, standards, and best practices could help unlock adoption and implementation. We’re really excited to work with you and with all the institutions and companies on stage to identify those opportunities and see how we can leverage NIST’s convening authority to help.

Sihao Huang

Thank you so much for that, Austin. To reemphasize: this standards initiative is really about making sure the products we build can connect with each other, such that if there’s a builder in India or a builder in Kenya building on top of our AI products, American companies can use what they build, and buy from them, as well. And similarly, if you want to switch to a different model, nothing is locked in. I think this ties back to a perspective that the U.S. government, and in particular the Trump administration, has about AI and AI products. We think back a lot on the history of the Internet and what it enabled for the world, but also what it enabled for America.

There was a perspective in the U.S. from a previous administration that technology had to be strictly locked down, and we think that’s a mistake. We want to share the best AI technologies with the rest of the world, and that’s also a leading message that our delegation brings here to the India AI Summit. When we think back on the success of the Internet, what enabled it? A number of companies and countries actually tried to create their own closed versions of the Internet, centralized, tied to particular nations or their own telecom networks, and they saw a little bit of success. A lot of them were state-subsidized, but none of them really scaled to the global level of the World Wide Web.

And the World Wide Web became so successful precisely because of the protocols that the U.S. government had supported. The U.S. government made a very intentional effort to ensure the Internet was a decentralized system, funding the independent development of protocols like TCP/IP and HTTPS, the Internet suite, which enabled the rest of the world to build on top of it. What you got is really a win-win situation: the entire world now benefits from access to the Internet and the ability to build applications and companies on top of it, which has driven so much prosperity for countries around the world, but also made Silicon Valley one of the wealthiest places in human history.

And it is because of this open commerce. That’s what we really want to create for a world of AI in the future as well. Just to add a bit to what Austin said about the agent security piece: why is agent security so important to us? Precisely because of adoption. You need security to drive adoption. If you look back again at the history of the Internet, the development of the Secure Sockets Layer, SSL, and eventually HTTPS, was what enabled e-commerce. So, again, it’s a lot about the efforts where we’re going to work together with industry to make sure there is this standards ecosystem, that there are interoperable interfaces everyone can build on and trust, to create the AI economy that we’re all looking forward to.

So I’ll stop ranting and turn to the companies here. I’ll ask you all: how do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enabled the open Internet, including interoperability and security?

Michael Sellitto

I feel like I need to fit an automobile analogy in here, since that seems to be the theme. Maybe I’ll use my favorite one. Right now, if you go down to the car dealership to buy a car, those cars come with a bunch of independently determined metrics you can use to understand the characteristics of the vehicle. It will tell you the fuel economy, how far you can drive on a gallon or liter of gas, and how the car performs in various types of crash tests. These metrics are produced in a standardized way, often by third parties, so you can have trust and confidence in them and know what kind of car you want to buy.

Maybe I’m a single person who likes to drive fast, so I’m mostly worried about head-on collisions because I’m going to be driving as fast as the car can possibly go, and that’s the biggest danger for me. Or maybe I have a family, and I’m worried about what happens if we get hit from the side with kids in the back seats. A piece of what this standardization can help us get to is that same kind of confidence, for customers, governments, and the public, in knowing what you’re purchasing. I think another real benefit, and it’s aligned with some things that Michael Kratsios, the OSTP director, talked about today and in an op-ed he had in the Financial Times, is around exporting the American AI stack.

There are a lot of concerns today about sovereignty, about having control over your systems and your data, and so on. A way that you can both use the best technology in the world, which sometimes comes from American companies, and also have confidence that there’s resilience in the system, is to have things built on open standards. That gives you the ability to decide to make changes. If today Anthropic is producing the best technology and tomorrow it’s X or OpenAI or someone else, you can change. Or maybe an open-source model gets good enough at the use case you care about, and you want to switch from a proprietary model to an open-source model.

So I think that’s what this can enable; that’s the opportunity we have ahead of us. And I think the vision of the AI security standards work that CAISI is going to be doing is this: if you’re going to entrust these systems with access to your personal data or your financial data, or with the ability to do things in the real world on behalf of your enterprise, you need some sense that there’s security, that there’s authentication, and that there’s an ability to come back and check with the user before taking certain significant actions. And you can test, evaluate, and report that information in a way that is intelligible to customers, so they know what they’re buying, when to trust, and when not to trust.


Owen Lauder

Yeah, well said, and I endorse a lot of what Mike mentioned there, and Austin and Sihao as well. I do think there’s a lot we can learn from the history of standards in various industries and apply to AI. Sihao mentioned some of the early Internet standards. I’m just about old enough to remember people in the early 90s talking about how they would never, ever put credit card information on the Internet; that would be absolutely insane. And it sort of was, when information was being shared in plain text in a totally unencrypted way. Then you get the secure layer that Sihao mentioned, HTTPS, and it has completely unlocked the modern Internet economy as we know it.

There’s the history of electrical standards as well. This was something that drove the adoption of electrical products in the late 19th and early 20th century. You had a scientific approach to standardizing units of measurement like ohms, volts, and amperes, which allowed power supplies to connect to the grid. It also meant you could invent things like fuses, which could be rated for a certain amperage and would shut themselves off if the current rose above it. So I think we need to continue learning from history, and there are a few principles we should take forward as we do. Open standards, as we’ve been discussing, are the right way to go.

You need technically robust standards that are informed by a real understanding of the technology and how it works, and we should be looking to prioritize interoperability as well. Maybe a final thought for this piece: we should also learn from standards that were not done well. Many industries have not quite gotten this right. A lot of us have traveled here from around the world having to bring adapters with us because our electrical products won’t plug into the wall. It’s really, really annoying, and it’s also a massive hindrance on commerce, because it means that if you’re producing a computer or another electronic appliance, you have to support a different plug for every country you’re developing the product for.

So those are things to avoid that we need to be mindful of as well.

Michael Brown

automobile industry or something, two humongous but separate industries, and how they’re going to have to come together to set up norms for how agentic systems work and how data is shared, I think government can probably play an important role in bringing together industries to establish those dialogues. But the industries certainly still need to be front and center in establishing what works for them because they are the practitioners and the experts on what their customers need, what their colleagues need. And so I think we’re all going to have to kind of navigate that world together and figure out what is the role for the research labs, how does government support, and then how does industry play a leadership role in both governing and building for itself industry -specific standards for the future of AI.

Wifredo Fernandez

Yeah, I think this conversation has been a bit of a history lesson; I appreciate that, thank you. And it made me think about how I used to get music when I was a kid, which some of the panelists may appreciate. There were these music catalogs that would come to your house. You’d select however many compact discs you wanted, put cash or a check in an envelope, and send it away, and some weeks later, magically, some CDs would appear on your doorstep. So when I think about instructing an agent to go acquire music on my behalf, I’d much rather have that. I don’t know how we used to put so much trust in a system without standards, in a process that could not be audited.

So I think the guiding principles that built the Internet still apply. We want privacy-preserving technology. We want technology that allows us to audit, that considers authenticity, and that considers means of consent. And to Michael’s point, ultimately agents serve the user and agents serve organizations; they don’t serve us as the model developers. If we view it through that lens, it should guide us right.

Sihao Huang

Great, thank you all so much for that. That was a bit of a nerdy discussion on standards, a bit of a history lesson; I love that. But we’re also here at the India AI Impact Summit, talking to a country of builders and to the developing world, home to some of the most dynamic AI markets anywhere. So it would be amazing to hear from the panelists, including Austin: how are you all engaging with the rest of the world on these standards, how are your organizations engaging with other countries on AI, and what are some of the most exciting applications you’ve seen develop on top of your standards and products?

Austin Marin

I guess I’ll lead off. One of the main forums through which CAISI engages internationally is the International Network for Advanced AI Measurement, Evaluation, and Science. It’s a bit of a mouthful of a name, but it’s ten countries that have established AI security institutes or, as we have, a Center for AI Standards and Innovation, and we meet a couple of times a year. We also engage in informal technical and scientific exchanges, and we share best practices in measurement and evaluation science. In December we met in San Diego, on the sidelines of the NeurIPS conference, and sat down to discuss open questions in measurement science and the challenges we’re facing; we published a blog post about a week ago that summarizes some of the areas of consensus and the open questions.

And the work we’re doing there is, I think, very important, because when we talk about the evaluation of AI systems, of particular capabilities, particular security vulnerabilities, and so on, it’s important for us to have consensus on the methodologies.

Related Resources
Knowledge base sources related to the discussion topics (15)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Protocols are essential for agents to work together smoothly and enable interoperability across products and businesses.”

The knowledge base explicitly states that protocols are crucial for builders to interact with products and achieve interoperability, confirming the report’s emphasis on their importance [S1] and [S3].

Confirmed (high)

“Google DeepMind’s Agent‑to‑Agent (A2A) protocol acts as a “digitised clipboard” conveying an agent’s identity, capabilities, intent, data requirements and security constraints, removing the need for custom code.”

S15 notes that Google has launched an agents-to-agents protocol, which aligns with the report’s description of A2A as a standardized way to convey agent metadata and eliminate bespoke integrations [S15].

Confirmed (high)

“Google DeepMind’s Universal Commerce Protocol (UCP) standardises how agents interact with websites and payment systems.”

The knowledge base mentions a “universal” protocol for agent communication with web services, supporting the report’s claim that a Universal Commerce Protocol standardises website and payment interactions [S15].

Confirmed (medium)

“The panel discussion will cover the business case for agentic AI and the public‑policy implications of its use.”

S19 describes a panel that will discuss the business case for agentic AI followed by a second panel on public-policy implications, confirming the report’s outline of the session’s agenda [S19].

External Sources (71)
S1
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S2
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Great. Thank you all so much for that. So that was a bit of a nerdy discussion on standards, a bit of a history lesson. …
S3
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And it is because of this open commerce. And that’s what we really want to create with a world of AI in the future as we…
S5
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — -Wifredo Fernandez- Director for Global Government Affairs at XAI
S6
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thank you, Sihal. Great to be with you all here, and thank you to the government for having us. What an exciting week, f…
S7
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S8
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with …
S9
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S10
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that…
S11
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Thanks for the tip. Hi, everyone. My name is Michael Brown. My name placard says George Osborne, who’s a colleague. He g…
S12
https://app.faicon.ai/ai-impact-summit-2026/us-ai-standards_-shaping-the-future-of-trustworthy-artificial-intelligence — And like, well, Anthropic introduced it. Hopefully, Anthropic would agree with this, that now it’s just like the thing, …
S13
From Innovation to Impact_ Bringing AI to the Public — Yeah, this was going to be my question to him. But it’s always fun to answer. I think that models are just not what we k…
S14
Driving Enterprise Impact Through Scalable AI Adoption — But with AI, we’re able to create programs much faster. The models are infinitely scalable. They’re always awake 24 -7. …
S15
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All right. Just speaking for myself, I can’t wait to use agents. I feel like it’s a lot of developer communities that ha…
S16
Challenging the status quo of AI security — Debora Comparin addressed the critical issue of identity management for AI agents, highlighting several open problems th…
S17
Digital standards — Besides developing standards for AI, SDOs can also rely on AI techniques tofacilitate and improve some of their activiti…
S18
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial…
S19
Agentic AI in Focus Opportunities Risks and Governance — Evidence:CAISI launched an AI agent standards initiative, issued an RFI on AI agent security, and announced sector-speci…
S20
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S21
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Boutife Adisa: Yeah, so, Bullet Defe Adisa for the record. In my opinion, I think in addition to what Lily has already s…
S22
Open Forum #33 Building an International AI Cooperation Ecosystem — – **Multi-stakeholder Approach and Inclusive Development**: Drawing parallels to internet governance, speakers stressed …
S23
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This discussion focused on governing AI development and ensuring safe, beneficial deployment while maintaining innovatio…
S24
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this…
S25
Main Session on Artificial Intelligence | IGF 2023 — In today’s world, Artificial Intelligence (AI) plays a pivotal role in transforming industries and daily life. By emulat…
S26
Setting the Rules_ Global AI Standards for Growth and Governance — And it’s going to have to be a collective effort. Yeah. Okay. Key areas of convergence included the importance of proce…
S27
Setting the Rules_ Global AI Standards for Growth and Governance — Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, …
S28
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have …
S29
Global AI Policy Framework: International Cooperation and Historical Perspectives — But I think this is a global forum and I would like to talk about this classical debate which started in 1960s and 70s a…
S30
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is invol…
S31
Harmonizing High-Tech: The role of AI standards as an implementation tool — Sergio Mujica:Very well, thank you, Bilel, and good afternoon to all of you. I think the most important point here is th…
S32
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Austin Marin Industry-led, consensus-based approach to standards development is prefer…
S33
Open Forum #26 High-level review of AI governance from Inter-governmental P — 1. Governments: Responsible for balancing innovation and security, and creating appropriate regulatory frameworks. Yoic…
S34
Challenging the status quo of AI security — Connection between observed security challenges and need for standards Given the new security challenges that emerge wh…
S35
WS #283 AI Agents: Ensuring Responsible Deployment — ### Government and Regulatory Approaches ### Introduction and Context ### Technical Infrastructure and Standards Anne…
S36
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel outlines a three‑layer security approach: protect agents from malicious inputs, protect the world from rogue agent…
S37
Not Losing Sight of Soft Power — Thailand’s strategy is built on a 13-pillar approach to soft power, encompassing diverse areas such as food, film, touri…
S38
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary:Speakers agree that industry should lead standards development with government playing a convening and facilitat…
S39
Fast-tracking a digital economy future in developing countries (UNCTAD) — They have also released laws supporting small business start-ups and providing incentives for import and local market sh…
S40
Artificial intelligence — Despite their technical nature – or rather because of that – standards have an important role to play in bridging techno…
S41
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S42
Agentic AI in Focus Opportunities Risks and Governance — So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of…
S43
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Disagreement level:Very low level of disagreement with high implications for successful AI implementation. The consensus…
S44
From Technical Safety to Societal Impact Rethinking AI Governanc — Disagreement level:Moderate disagreement with significant implications – while speakers generally agree on the need to m…
S45
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — Very low level of disagreement with high implications for successful AI implementation. The consensus suggests strong al…
S46
AI in Mobility_ Accelerating the Next Era of Intelligent Transport — Disagreement level:Moderate disagreement level with significant implications for policy and investment decisions. The di…
S47
From Technical Safety to Societal Impact Rethinking AI Governanc — Moderate disagreement with significant implications – while speakers generally agree on the need to move beyond purely t…
S48
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary:All speakers strongly advocate for open, interoperable standards that enable cross-vendor compatibility and prev…
S49
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The conversation centered around emerging AI agent protocols that enable different AI systems to work together seamlessl…
S50
WS #283 AI Agents: Ensuring Responsible Deployment — Carter describes specific technical developments including Google’s agent-to-agent protocol for vendor-agnostic interact…
S51
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Would you like me to go and take care of you and get some more toothpaste for you? You mentioned standards, which I thin…
S52
WS #187 Bridging Internet AI Governance From Theory to Practice — – Yik Chan Ching- Audience Explains A2A (agent-to-agent) and MCP (model context protocol) standards and uses the analog…
S53
Agentic AI in Focus Opportunities Risks and Governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S54
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S55
WS #204 Closing Digital Divides by Universal Access Acceptance — All speakers emphasize that trust and safety are prerequisites for meaningful internet adoption, particularly for vulner…
S56
Closing remarks — Standards are needed to help build trust, and trust isn’t a property of machines but how we handle uncertainty together
S57
Open Forum #33 Building an International AI Cooperation Ecosystem — ### Regional Perspectives **Dai Wei** from the Internet Society of China highlighted the organization’s cooperation wit…
S58
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S59
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — This discussion focused on governing AI development and ensuring safe, beneficial deployment while maintaining innovatio…
S60
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this…
S61
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The speaker explains the origins of the global AI capacity building network, crediting Saudi Arabia and Kenya with initi…
S62
Digital standards — ‘Standards can underpin regulatory frameworks and […] provide appropriate guardrails for responsible, safe and trustwo…
S63
US AI Safety Institute staff left out of Paris summit delegation — Vice President JD Vancewill lead the US delegationto a major AI summit in Paris next week, but technical staff from the …
S64
AI demand drives record power sector deals — The US power industry isexperiencinga surge in mergers and acquisitions (M&A) as record demand for electricity, particul…
S65
Data first in the AI era — – Someone from the Department of Commerce
S66
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Hello. Yeah. Thank you very much, Professor Karandika. This is a perfect question for me to talk about. This is why I’m …
S67
Building Population-Scale Digital Public Infrastructure for AI — Very interesting. And I’ll just try to kind of paint the picture by giving a context. Now, think about it. We’re talking…
S68
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Mr. Timothy Grosser from EY discussed leveraging technology for sustainable development goals, focusing on digital publi…
S69
AI for food systems — Dejan Jakovljevic argues that standardization and reference architectures serve as foundational elements that enable res…
S70
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — The discussion featured Giordano Albertazzi, CEO of Vertiv, addressing the critical physical infrastructure requirements…
S71
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — This discussion focused on AI assurance and the challenges of ensuring AI systems, particularly emerging agentic AI, are…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Michael Sellitto
4 arguments · 183 words per minute · 1123 words · 366 seconds
Argument 1
MCP as universal open standard for connecting AI models to enterprise data and tools (Michael Sellitto)
EXPLANATION
MCP is presented as a universal, open protocol that lets AI systems link directly to existing enterprise knowledge bases and governmental data sources. It simplifies integration by allowing models to understand and access data with a simple description.
EVIDENCE
Sellitto explains that MCP connects AI models to enterprise knowledge bases and government data sources, such as the Indian government’s digitized datasets, by providing a rough description of the data source and tools, enabling intuitive access similar to how a human would retrieve payroll or revenue data [28-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe MCP as a universal open standard that connects AI models to enterprise knowledge bases and government data, and note that prior to MCP integrations were bespoke and vendor-locked [S1][S2].
MAJOR DISCUSSION POINT
Universal data connectivity
AGREED WITH
Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
Argument 2
Skills protocol enables one‑time teaching of tasks and portable agent capabilities across models (Michael Sellitto)
EXPLANATION
The Skills protocol allows developers to encode task instructions once and attach them to agents, making the skills portable across different AI providers. This promotes reusability and reduces the need for repeated training.
EVIDENCE
Sellitto describes Skills as a set of instructions that teach agents how to perform specific tasks, likening it to onboarding a new employee who learns a process once and can then repeat it, and notes that these skills can be transferred when switching between AI companies [39-48].
MAJOR DISCUSSION POINT
Task portability
AGREED WITH
Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
Argument 3
Open standards prevent vendor lock‑in and allow seamless switching between AI models (Michael Sellitto)
EXPLANATION
By adopting open, interoperable protocols like MCP, organizations avoid being tied to a single vendor’s proprietary system. This flexibility encourages competition and innovation across the AI market.
EVIDENCE
Sellitto contrasts the pre-MCP era, where bespoke integrations locked users into a single model, with the current open-source protocol that supports all major AI companies, thereby preventing vendor lock-in [37-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources highlight that open standards avoid vendor lock-in and enable switching between providers, contrasting pre-MCP bespoke solutions [S1][S2].
MAJOR DISCUSSION POINT
Avoiding vendor lock‑in
AGREED WITH
Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
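The lock-in-free model behind this argument can be sketched as a toy tool registry: a tool is described once in plain language, and any compliant client can then discover and call it. All names here (`register_tool`, `discover`, `lookup_order`) are illustrative assumptions, not the real MCP SDK or wire format.

```python
# Minimal sketch of the idea behind MCP (not the real SDK): a tool advertises
# itself with a plain description, and any compliant client can discover and
# invoke it, removing the need for per-vendor bespoke integrations.
tool_registry: dict[str, dict] = {}

def register_tool(name: str, description: str, handler) -> None:
    """Describe a tool once; every connected model can then discover it."""
    tool_registry[name] = {"description": description, "handler": handler}

def discover() -> list[dict]:
    """What an MCP-style client sees when it lists available tools."""
    return [{"name": n, "description": t["description"]}
            for n, t in tool_registry.items()]

register_tool(
    "lookup_order",
    "Fetch an order record from the enterprise database by order ID.",
    lambda order_id: {"order_id": order_id, "status": "shipped"},
)

# Any vendor's model can call the same tool through the same interface,
# which is what removes per-vendor lock-in in this toy model.
result = tool_registry["lookup_order"]["handler"]("A-1001")
```

Because the interface is the description rather than vendor-specific glue code, swapping the model on the client side leaves the tool untouched.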
Argument 4
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
EXPLANATION
Sellitto draws an analogy between standardized automotive safety metrics and the need for comparable AI security metrics. Standardized testing would give users confidence in the safety and performance of AI agents.
EVIDENCE
He compares buying a car, where fuel economy, crash-test results, and other standardized metrics inform consumer confidence, to the need for similar standardized security metrics for AI agents, emphasizing trust and safety [212-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion references automotive crash-test analogies for AI security metrics, emphasizing the need for standardized, independently verified testing <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1][S2].
MAJOR DISCUSSION POINT
Standardized security metrics
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez
Austin Marin
6 arguments · 191 words per minute · 1263 words · 395 seconds
Argument 1
Request for information on AI agent security challenges to shape future standards (Austin Marin)
EXPLANATION
The Center issues an RFI to collect industry input on security challenges faced by AI agents. The feedback will guide the development of future voluntary standards.
EVIDENCE
Marin announces a request for information that closes in March, urging stakeholders to comment on AI agent security challenges as a first step toward developing standards [155-161].
MAJOR DISCUSSION POINT
RFI on agent security
AGREED WITH
Michael Sellitto, Sihao Huang, Owen Lauder, Wifredo Fernandez
Argument 2
Draft guidance on AI agent identity and authorization to ensure trustworthy interactions (Austin Marin)
EXPLANATION
NIST’s Information Technology Laboratory has released a draft document on agent identity and authorization, inviting public comment to shape trustworthy agent interactions.
EVIDENCE
Marin points to a draft for comment on AI agent identity and authorization prepared by NIST’s ITL, encouraging engagement from the community [163-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A draft on AI agent identity and authorization is cited, outlining open problems such as defining agent identity and verification mechanisms [S16].
MAJOR DISCUSSION POINT
Identity and authorization draft
AGREED WITH
Michael Sellitto, Sihao Huang, Owen Lauder, Wifredo Fernandez
Argument 3
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies to avoid duplication (Austin Marin)
EXPLANATION
The Center acts as the primary liaison between industry and U.S. government, streamlining communication and preventing multiple agencies from issuing redundant requests to companies.
EVIDENCE
Marin describes the Center’s role as the “front door” for industry, coordinating with various agencies to avoid duplicate requests and ensuring companies speak to the right government advisors [138-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Center for AI Standards and Innovation is introduced as the industry “front door,” coordinating agency interactions and preventing redundant requests [S2].
MAJOR DISCUSSION POINT
Industry front‑door coordination
AGREED WITH
Sihao Huang, Michael Brown, Michael Sellitto
Argument 4
NIST‑led Agent Standards Initiative aims to develop voluntary, consensus‑based AI standards (Austin Marin)
EXPLANATION
NIST, through its long‑standing voluntary standards process, is launching an AI Agent Standards Initiative to create consensus‑driven standards for AI agents.
EVIDENCE
Marin explains that NIST’s historic role in voluntary standards is being extended to AI agents via the newly announced AI Agent Standards Initiative, emphasizing industry-driven consensus [146-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
NIST’s century-long voluntary standards process is described as the basis for the new AI Agent Standards Initiative [S2].
MAJOR DISCUSSION POINT
Voluntary consensus standards
AGREED WITH
Sihao Huang, Michael Brown, Michael Sellitto
Argument 5
Government seeks to provide guidance while letting industry lead technical standard development (Austin Marin)
EXPLANATION
The government’s approach is to offer high‑level guidance and coordination while allowing industry experts to define the technical details of standards.
EVIDENCE
Marin notes that the Center works to ensure industry receives appropriate guidance without being overwhelmed, positioning the government as a coordinator rather than a regulator [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The approach of government providing high-level guidance while industry leads technical standard development is noted as the preferred model [S2].
MAJOR DISCUSSION POINT
Guidance vs. industry leadership
Argument 6
Participation in the International Network for Advanced AI Measurement, Evaluation, and Science (INAEMS) to share best practices globally (Austin Marin)
EXPLANATION
Through INAEMS, the Center collaborates with a ten-country network of AI security institutes to exchange measurement and evaluation methodologies, fostering global consensus.
EVIDENCE
Marin outlines INAEMS as a ten-country network that meets regularly, conducts informal technical exchanges, and recently published a blog post summarizing consensus and open questions in AI measurement [276-280].
MAJOR DISCUSSION POINT
Global measurement collaboration
AGREED WITH
Sihao Huang, Owen Lauder, Wifredo Fernandez, Michael Brown
Wifredo Fernandez
4 arguments · 156 words per minute · 603 words · 231 seconds
Argument 1
XAI builds on peer standards to create a “parallel Internet” for agent development (Wifredo Fernandez)
EXPLANATION
XAI leverages existing industry standards to construct a new, layered ecosystem—described as a parallel Internet—that supports rapid agent development and deployment.
EVIDENCE
Fernandez states that open standards are creating a new layer, a “parallel Internet,” which is crucial for broader internet development and for XAI’s own progress [121-123].
MAJOR DISCUSSION POINT
Parallel Internet concept
AGREED WITH
Michael Sellitto, Owen Lauder, Michael Brown, Sihao Huang
Argument 2
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
EXPLANATION
He emphasizes that agents must incorporate privacy safeguards, auditability, authenticity, and consent mechanisms to be trustworthy for users and organizations.
EVIDENCE
Fernandez lists guiding principles (privacy-preserving technology, auditability, authenticity, and consent) as essential for trustworthy agent services [264-268].
MAJOR DISCUSSION POINT
Trustworthy design principles
AGREED WITH
Michael Sellitto, Sihao Huang, Austin Marin, Owen Lauder
Argument 3
Music‑catalog delivery analogy underscores the need for trust, auditability, and standards in agent services (Wifredo Fernandez)
EXPLANATION
He draws a parallel between old music‑catalog ordering systems and modern agent services, highlighting how standards and auditability build user trust.
EVIDENCE
Fernandez recounts how, in the past, ordering music CDs required trust in a system without standards, suggesting that modern agents need similar trust, auditability, and standards [257-263].
MAJOR DISCUSSION POINT
Trust through standards
Argument 4
XAI’s engagement with the global X platform illustrates cross‑border collaboration on standards (Wifredo Fernandez)
EXPLANATION
XAI collaborates with the broader X ecosystem, demonstrating how standards can be co‑developed across platforms and geographies.
EVIDENCE
Fernandez mentions that XAI and X operate in tandem, noting the visibility of initiatives like MacroHearts on X and the collaborative discussion space it provides [118-119].
MAJOR DISCUSSION POINT
Cross‑platform collaboration
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Michael Brown
Owen Lauder
4 arguments · 212 words per minute · 892 words · 251 seconds
Argument 1
Agent‑to‑Agent standard defines shared identity, capabilities, and security requirements for inter‑agent communication (Owen Lauder)
EXPLANATION
The standard provides a structured “digital clipboard” that conveys an agent’s ID, capabilities, goals, data handling, and security needs, enabling seamless agent‑to‑agent interaction.
EVIDENCE
Lauder explains that the agent-to-agent protocol includes fields for ID, capabilities, intent, data handling, and security requirements, forming a digitized clipboard for agents to share information [63-73].
MAJOR DISCUSSION POINT
Shared agent identity
AGREED WITH
Michael Sellitto, Michael Brown, Sihao Huang, Wifredo Fernandez
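The "digitized clipboard" exchange described above might be sketched as a simple data card carrying identity, capabilities, intent, data handling, and security requirements. Every field name here is an illustrative assumption rather than the published agent-to-agent specification.

```python
# Illustrative sketch of the "digitized clipboard" an agent-to-agent exchange
# might carry. Field names are assumptions for the example, not the A2A spec.
from dataclasses import dataclass, field, asdict

@dataclass
class AgentCard:
    agent_id: str
    capabilities: list[str]
    intent: str
    data_handling: str
    security_requirements: list[str] = field(default_factory=list)

booking_agent = AgentCard(
    agent_id="travel-booker-01",
    capabilities=["search-flights", "reserve-hotels"],
    intent="book a round trip for one traveler",
    data_handling="PII retained only for the duration of the booking",
    security_requirements=["mutual-tls", "signed-requests"],
)

# Serialized, the card is handed to a peer agent before any task begins,
# so both sides know who they are talking to and what is required.
clipboard = asdict(booking_agent)
```

A receiving agent would inspect the card before accepting work, e.g. refusing the exchange if it cannot satisfy a listed security requirement.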
Argument 2
Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems for business transactions (Owen Lauder)
EXPLANATION
UCP standardizes how agents communicate with e‑commerce sites and payment gateways, facilitating automated business transactions across platforms and retailers.
EVIDENCE
Lauder describes Google’s Universal Commerce Protocol as enabling agents to talk to websites and payment systems, citing collaborations with retailers like Walmart, Target, Flipkart, and Infosys [74-77].
MAJOR DISCUSSION POINT
Agent‑driven commerce
AGREED WITH
Michael Sellitto, Michael Brown, Sihao Huang, Wifredo Fernandez
Argument 3
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
EXPLANATION
He stresses that standards must be technically sound and interoperable to ensure reliable, secure, and testable AI systems across the industry.
EVIDENCE
Lauder notes the need for technical, interoperable, and testing standards to guarantee reliability and security of AI systems [60-62].
MAJOR DISCUSSION POINT
Robust technical standards
AGREED WITH
Michael Sellitto, Sihao Huang, Austin Marin, Wifredo Fernandez
Argument 4
Electrical standards (volts, amps, fuses) demonstrate how common units enable safe, interoperable products (Owen Lauder)
EXPLANATION
He draws a parallel to historical electrical standards that allowed universal power connections and safety devices, illustrating the power of shared measurement units.
EVIDENCE
Lauder references the standardization of electrical units (ohms, volts, amperes) and safety devices like fuses, which enabled safe, interoperable electrical products [238-242].
MAJOR DISCUSSION POINT
Historical electrical standards
Michael Brown
4 arguments · 163 words per minute · 631 words · 232 seconds
Argument 1
OpenAI commerce protocol enables agents to book travel and handle e‑commerce actions (Michael Brown)
EXPLANATION
OpenAI’s protocol allows agents to autonomously arrange flights, hotels, and other travel logistics, showcasing practical e‑commerce capabilities of AI agents.
EVIDENCE
Brown explains that OpenAI’s commerce protocol lets an agent that knows a user wants a family vacation to Goa automatically secure flights and hotels, illustrating cross-company agent commerce [98-101].
MAJOR DISCUSSION POINT
AI‑driven travel booking
Argument 2
Shared global understanding (e.g., traffic‑light analogy) fosters secure, accessible agent development worldwide (Michael Brown)
EXPLANATION
He uses the universal traffic‑light system as an analogy for the need for shared conventions that enable secure and accessible AI development across nations.
EVIDENCE
Brown compares traffic-light meanings (red = stop, green = go) across countries to illustrate how shared understanding underpins secure, accessible agent development [86-92].
MAJOR DISCUSSION POINT
Universal conventions
AGREED WITH
Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez
Argument 3
Open standards drive democratization, competition, and broader AI “pie” growth (Michael Brown)
EXPLANATION
He argues that open standards expand the overall AI market, fostering competition and allowing more participants to benefit from AI advancements.
EVIDENCE
Brown describes the panel as a collaborative opportunity to “grow the pie,” emphasizing that open standards democratize AI and create broader benefits for all [85-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists stress that open standards prevent lock-in, promote competition and expand the AI market globally <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1][S2].
MAJOR DISCUSSION POINT
Democratizing AI
AGREED WITH
Michael Sellitto, Owen Lauder, Sihao Huang, Wifredo Fernandez
Argument 4
Collaborative government‑industry partnership highlighted as key to advancing standards (Michael Brown)
EXPLANATION
Brown underscores the importance of close cooperation between government bodies and industry to develop and adopt AI standards effectively.
EVIDENCE
He thanks the government and notes the collaborative spirit of the panel, highlighting that such partnerships are essential for advancing standards [85-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel thanked the government and highlighted collaborative efforts between agencies and industry as essential for advancing standards [S2].
MAJOR DISCUSSION POINT
Gov‑industry collaboration
AGREED WITH
Austin Marin, Sihao Huang, Michael Sellitto
Sihao Huang
6 arguments · 196 words per minute · 1363 words · 415 seconds
Argument 1
Historical U.S. internet standards (TCP/IP, HTTPS) illustrate how open protocols generate global prosperity (Sihao Huang)
EXPLANATION
He points to the foundational U.S. protocols that enabled a decentralized, open internet, which in turn spurred worldwide economic growth and innovation.
EVIDENCE
Huang recounts how U.S. government-backed protocols like TCP/IP and HTTPS created a decentralized internet that drove global prosperity and Silicon Valley’s wealth [198-201].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists drew parallels to TCP/IP and HTTPS as open protocols that generated worldwide prosperity and innovation <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1][S2].
MAJOR DISCUSSION POINT
Open internet foundations
AGREED WITH
Michael Sellitto, Owen Lauder, Michael Brown, Wifredo Fernandez
Argument 2
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
EXPLANATION
He draws a parallel between the historical adoption of SSL/HTTPS and the need for similar security standards to foster AI‑driven e‑commerce.
EVIDENCE
Huang explains that SSL and later HTTPS were pivotal in enabling e-commerce, suggesting analogous security standards are needed for AI agents [206-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy to SSL/HTTPS illustrates how security standards enable e-commerce, a point made for AI agents as well <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1][S2].
MAJOR DISCUSSION POINT
Security enabling commerce
AGREED WITH
Michael Sellitto, Austin Marin, Owen Lauder, Wifredo Fernandez
Argument 3
U.S. policy promotes exporting an open AI stack, mirroring the open‑internet model (Sihao Huang)
EXPLANATION
He notes that U.S. policy aims to share open AI technologies globally, echoing the historic strategy of exporting an open internet architecture.
EVIDENCE
Huang references Michael Kratsios’s op-ed about exporting the American AI stack, emphasizing openness to allow other nations to adopt and switch technologies [219-224].
MAJOR DISCUSSION POINT
Open AI export policy
AGREED WITH
Austin Marin, Michael Brown, Michael Sellitto
Argument 4
Early internet protocols (TCP/IP, HTTPS) enabled global connectivity and economic growth (Sihao Huang)
EXPLANATION
Reiterates that early open internet standards were critical for worldwide connectivity and the subsequent economic boom.
EVIDENCE
He again highlights the role of TCP/IP and HTTPS in creating a globally connected, prosperous internet ecosystem [198-201].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists highlighted how early open internet protocols like TCP/IP and HTTPS created global connectivity and spurred economic growth <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1][S2].
MAJOR DISCUSSION POINT
Impact of early internet standards
Argument 5
Emphasis on building standards that work for builders in India, Kenya, and other emerging markets (Sihao Huang)
EXPLANATION
He stresses the need for standards that enable developers worldwide—including those in emerging economies—to build, switch, and purchase AI services without barriers.
EVIDENCE
Huang mentions that standards should allow builders in India and Kenya to develop on top of U.S. AI products and enable cross-border buying and switching [186-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
References to India’s Digital Public Infrastructure and the need for standards that support builders in emerging economies are provided <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1].
MAJOR DISCUSSION POINT
Inclusive global standards
AGREED WITH
Austin Marin, Owen Lauder, Wifredo Fernandez, Michael Brown
Argument 6
Goal to create a globally interoperable AI ecosystem that benefits both developed and developing economies (Sihao Huang)
EXPLANATION
He envisions an AI ecosystem where interoperable standards allow seamless collaboration and commerce across nations, mirroring the open internet’s success.
EVIDENCE
Huang ties the vision of a globally interoperable AI ecosystem to the historical success of open internet protocols, emphasizing benefits for both developed and developing economies [187-194].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of a globally interoperable AI ecosystem benefiting all economies is linked to the success of open internet standards <a href="https://dig.watch/event/india-ai-impact-summit-2026/u-s-ai-standards-shaping-the-future-of-trustworthy-artificial-intelligence/" target="_blank" class="diplo-source-cite" title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-title="U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence" data-source-snippet="Sihao Huang: of these agents work with each other smoothly. And protocols are so important because that’s how builders interact with these products and how we make them interoperable">[S1][S2].
MAJOR DISCUSSION POINT
Global AI ecosystem vision
Agreements
Agreement Points
Open standards are essential for interoperability, avoiding vendor lock‑in and fostering a global AI ecosystem.
Speakers: Michael Sellitto, Owen Lauder, Michael Brown, Sihao Huang, Wifredo Fernandez
MCP as universal open standard for connecting AI models to enterprise data and tools (Michael Sellitto)
Open standards prevent vendor lock‑in and allow seamless switching between AI models (Michael Sellitto)
Skills protocol enables one‑time teaching of tasks and portable agent capabilities across models (Michael Sellitto)
Agent‑to‑Agent standard defines shared identity, capabilities, and security requirements for inter‑agent communication (Owen Lauder)
Universal Commerce Protocol (UCP) lets agents interact with websites and payment systems for business transactions (Owen Lauder)
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
Open standards drive democratization, competition, and broader AI “pie” growth (Michael Brown)
Historical U.S. internet standards (TCP/IP, HTTPS) illustrate how open protocols generate global prosperity (Sihao Huang)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
XAI builds on peer standards to create a “parallel Internet” for agent development (Wifredo Fernandez)
All speakers highlighted that open, interoperable standards (whether MCP, agent-to-agent, UCP, or historical internet protocols) prevent vendor lock-in, enable cross-border collaboration, and expand the AI market, echoing the vision of a globally interoperable AI ecosystem [28-38][39-48][60-62][63-73][74-77][85-92][198-201][206-207][121-123].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the consensus at IGF 2023 that process-oriented, open standards are needed to ensure interoperability and prevent vendor lock-in, as highlighted in the “Setting the Rules” reports and the call for international cooperation among standards bodies [S26][S27][S31][S40].
Security and trust standards are critical to enable safe adoption and commercial use of AI agents.
Speakers: Michael Sellitto, Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
Request for information on AI agent security challenges to shape future standards (Austin Marin)
Draft guidance on AI agent identity and authorization to ensure trustworthy interactions (Austin Marin)
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
Speakers converged on the need for security-focused standards (metric analogies, SSL/HTTPS precedents, formal RFIs, identity drafts, and privacy-preserving design) to build trust and enable AI agents to handle sensitive data and commerce safely [212-218][206-207][155-161][163-165][60-62][264-268].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for dedicated security standards for AI agents is emphasized in recent IGF discussions, which outline a three-layer security model and stress that standards reduce risk and build trust for commercial deployment [S34][S36][S41][S42].
Government should act as a coordinator and facilitator, providing high‑level guidance while allowing industry to lead technical standard development.
Speakers: Austin Marin, Sihao Huang, Michael Brown, Michael Sellitto
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies to avoid duplication (Austin Marin)
NIST‑led Agent Standards Initiative aims to develop voluntary, consensus‑based AI standards (Austin Marin)
U.S. policy promotes exporting an open AI stack, mirroring the open‑internet model (Sihao Huang)
Collaborative government‑industry partnership highlighted as key to advancing standards (Michael Brown)
Anthropic partnership with the Trump administration underscores productive government‑industry collaboration (Michael Sellitto)
All four speakers emphasized a government role that coordinates, issues RFIs, and supports voluntary consensus standards, while leaving detailed technical work to industry, reflecting a collaborative, non-regulatory approach [138-145][146-154][219-224][85-92][27].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple U.S. and international forums advocate an industry-led, consensus-based approach with governments playing a convening role rather than imposing regulations, reflecting the stance of the U.S. AI Standards initiative and IGF panels [S32][S38][S33][S28][S30].
Global collaboration and inclusion of emerging economies are essential for developing AI standards that serve all builders.
Speakers: Sihao Huang, Austin Marin, Owen Lauder, Wifredo Fernandez, Michael Brown
Emphasis on building standards that work for builders in India, Kenya, and other emerging markets (Sihao Huang)
Participation in the International Network for Advanced AI Measurement, Evaluation, and Science (INAEMS) to share best practices globally (Austin Marin)
Partnering with companies worldwide such as Walmart, Target, Flipkart, and Infosys demonstrates global engagement (Owen Lauder)
XAI’s engagement with the global X platform illustrates cross‑border collaboration on standards (Wifredo Fernandez)
Shared global understanding (e.g., traffic‑light analogy) fosters secure, accessible agent development worldwide (Michael Brown)
Speakers agreed that standards must be co-created with global partners, especially builders in developing regions, and cited concrete mechanisms like INAEMS, multinational corporate partnerships, and analogies that underscore universal conventions [186-190][276-280][77][118-119][86-92].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of multistakeholder, globally inclusive processes is documented in IGF 2023 sessions and the Global AI Policy Framework, which stress participation of Global South actors to avoid a US-centric model [S26][S29][S31][S28].
Similar Viewpoints
Both emphasize that open, technically sound standards are the foundation for interoperability and avoiding vendor lock‑in across AI agents and data integrations [28-38][39-48][60-62][63-73].
Speakers: Michael Sellitto, Owen Lauder
MCP as universal open standard for connecting AI models to enterprise data and tools (Michael Sellitto)
Open standards prevent vendor lock‑in and allow seamless switching between AI models (Michael Sellitto)
Technically robust, interoperable standards are essential for reliable AI ecosystems (Owen Lauder)
Agent‑to‑Agent standard defines shared identity, capabilities, and security requirements for inter‑agent communication (Owen Lauder)
Both view U.S. government action as pivotal in fostering open, globally beneficial AI standards, drawing on historic internet protocol successes [138-145][219-224][198-201].
Speakers: Austin Marin, Sihao Huang
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies (Austin Marin)
U.S. policy promotes exporting an open AI stack, mirroring the open‑internet model (Sihao Huang)
Historical U.S. internet standards (TCP/IP, HTTPS) illustrate how open protocols generate global prosperity (Sihao Huang)
Both use familiar analogies from other domains to argue that common standards are necessary for secure, interoperable AI services worldwide [86-92][206-207].
Speakers: Michael Brown, Sihao Huang
Shared global understanding (traffic‑light analogy) fosters secure, accessible agent development worldwide (Michael Brown)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
Both stress that measurable security and trust mechanisms (auditability, privacy, standardized metrics) are required for agents to be adopted safely [264-268][212-218].
Speakers: Wifredo Fernandez, Michael Sellitto
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
Unexpected Consensus
Alignment on the need for auditability and trust in AI agents despite differing primary framings (privacy vs. safety metrics).
Speakers: Wifredo Fernandez, Michael Sellitto, Sihao Huang
Privacy‑preserving, auditable, consent‑driven design is critical for agent‑driven services (Wifredo Fernandez)
Security metrics analogous to automotive crash‑test standards provide confidence in AI agents (Michael Sellitto)
SSL/HTTPS analogy shows how security standards enable e‑commerce adoption for AI agents (Sihao Huang)
While Fernandez focuses on privacy and auditability, Sellitto and Huang discuss safety metrics and security protocols; nevertheless, all three converge on the principle that transparent, measurable trust mechanisms are indispensable for agent deployment, a convergence not explicitly anticipated at the start of the panel [264-268][212-218][206-207].
Overall Assessment

The panel displayed a strong, multi‑speaker consensus that open, interoperable standards—paired with robust security and trust frameworks—are the cornerstone for a globally inclusive AI ecosystem. Government is seen as a facilitator rather than a regulator, and international collaboration, especially with emerging markets, is deemed essential.

High consensus: the convergence across industry and government representatives on open standards, security, and global collaboration suggests a solid foundation for coordinated policy and technical work, likely accelerating the development and adoption of AI agent standards worldwide.

Differences
Different Viewpoints
Role of government in driving AI standards versus industry‑led open standards
Speakers: Austin Marin, Michael Sellitto
Center for AI Standards and Innovation serves as industry “front door,” coordinating across agencies and using NIST-led voluntary consensus processes [138-145][146-154]
MCP as universal open standard for connecting AI models to enterprise data and tools, emphasizing industry-driven open source protocol without need for government solicitation [28-38]
Austin proposes a coordinated government role that issues RFIs and drafts to shape voluntary standards, while Michael stresses that market-driven open protocols like MCP already provide the needed interoperability, suggesting less direct government involvement [155-161][28-38].
POLICY CONTEXT (KNOWLEDGE BASE)
This tension is captured in the U.S. AI Standards discussions where industry-led consensus is preferred and governments are urged to facilitate rather than dictate, highlighting divergent views on regulatory involvement [S32][S38][S30][S33].
Global multilateral collaboration versus a U.S.–centric export model for AI standards
Speakers: Austin Marin, Sihao Huang
Austin Marin: participation in the International Network for Advanced AI Measurement, Evaluation, and Science (INAEMS) to share best practices globally [276-280]
Sihao Huang: U.S. policy promotes exporting an open AI stack, echoing historic U.S. internet protocols as a model for worldwide adoption [219-224][190-200]
Austin emphasizes a ten-country network and multilateral exchanges to develop standards, whereas Sihao frames the U.S. approach as exporting an open AI stack rooted in historic U.S. internet standards, implying a more U.S.-led direction [276-280][219-224][190-200].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on a US-centric export approach versus broader multilateral cooperation have been raised in the Global AI Policy Framework and IGF panels, underscoring historical North-South dynamics in standards setting [S29][S26][S31].
Approach to security standards for AI agents
Speakers: Austin Marin, Michael Sellitto
Austin Marin: draft guidance on AI agent identity and authorization, plus a request for information to collect industry security challenges [163-165][155-161]
Michael Sellitto: security metrics analogous to automotive crash-test standards provide confidence in AI agents [212-218]
Austin focuses on identity/authorization drafts and sector-specific RFIs to build security standards, while Michael advocates concrete, metric-based safety testing modeled on automotive standards, reflecting different pathways to achieve trustworthy agents [163-165][155-161][212-218].
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent proposals for AI agent security-ranging from process-oriented uncertainty quantification to layered technical safeguards-are reflected in recent IGF workshops and security-focused briefs, indicating ongoing disagreement on the optimal framework [S34][S36][S44][S42].
Unexpected Differences
Misstatement of traffic‑light conventions indicating differing levels of technical precision
Speakers: Michael Brown, Sihao Huang, Owen Lauder
Michael Brown: says "red means go" and then corrects himself, showing uncertainty about basic conventions [86-88]
Sihao Huang: references precise historical standards (TCP/IP, HTTPS) that underpin global interoperability [198-201]
Brown’s casual, inaccurate traffic-light analogy contrasts with other speakers’ emphasis on rigorously defined technical standards, revealing an unexpected gap in shared understanding of baseline conventions [86-88][198-201].
Overall Assessment

The panel shows strong overall consensus on the need for open, interoperable AI standards, security, and global inclusion. Disagreements are limited to the preferred locus of leadership (government‑coordinated versus industry‑driven), the framing of international collaboration versus a U.S.–centric export model, and the methodological path to security assurance (policy‑driven drafts versus metric‑based testing).

Low to moderate disagreement; the differences are largely about implementation pathways rather than fundamental goals, suggesting that progress on AI standards can continue with coordinated effort, though alignment on governance mechanisms will be required.

Partial Agreements
All agree that interoperable standards are essential to avoid vendor lock‑in and enable global builders, but they propose different technical mechanisms (MCP, agent‑to‑agent clipboard, commerce protocols) to achieve that goal [28-38][63-73][98-101][186-190].
Speakers: Michael Sellitto, Owen Lauder, Michael Brown, Sihao Huang
Michael Sellitto: MCP as a universal open standard for data connectivity [28-38]
Owen Lauder: the Agent-to-Agent standard defines shared identity, capabilities, and security requirements [63-73]
Michael Brown: OpenAI's commerce protocol enables agents to book travel and handle e-commerce actions [98-101]
Sihao Huang: standards should let builders in India, Kenya, and elsewhere switch models and buy across borders [186-190]
They share the aim of inclusive global standards, yet differ on the method: Sihao stresses universal protocols, Austin proposes targeted listening sessions, and Owen draws lessons from historical technical standards [186-190][165-170][238-242].
Speakers: Sihao Huang, Austin Marin, Owen Lauder
Sihao Huang: goal of standards that work for builders in emerging markets like India and Kenya [186-190]
Austin Marin: sector-specific listening sessions to surface challenges in education, health, and finance [165-170]
Owen Lauder: historical electrical standards illustrate how common units enable safe, interoperable products [238-242]
All concur that security and shared conventions are vital, but propose different routes: policy‑driven RFIs, metric‑based testing, or simple universal analogies to foster trust [202-207][155-161][163-165][212-218][86-92].
Speakers: Sihao Huang, Austin Marin, Michael Sellitto, Michael Brown
Sihao Huang: security is a prerequisite for adoption (SSL/HTTPS analogy) [202-207]
Austin Marin: RFI and draft identity/authorization guidance to shape security standards [155-161][163-165]
Michael Sellitto: automotive-style safety metrics to build confidence [212-218]
Michael Brown: universal traffic-light analogy for shared conventions [86-92]
Takeaways
Key takeaways
Open, interoperable AI agent protocols (e.g., MCP, Skills, Agent-to-Agent, Universal Commerce Protocol) are essential to prevent vendor lock-in and enable a global AI ecosystem.
Standardization efforts are being led by industry with coordinated support from the U.S. government (Center for AI Standards and Innovation, NIST) to create voluntary, consensus-based specifications.
Security, authentication, privacy, and auditability are critical for the adoption of AI agents, analogous to SSL/HTTPS for the web and automotive safety standards.
Historical precedents (TCP/IP, HTTPS, electrical standards, automotive crash-test metrics) illustrate how open standards drive widespread innovation and economic growth.
International collaboration (e.g., INAEMS, engagement with builders in India, Kenya, and other emerging markets) is necessary to ensure standards are globally applicable and inclusive.
Sector-specific challenges (PII handling in education/healthcare, evaluation metrics, metrology) must be identified through listening sessions and stakeholder input.
Resolutions and action items
Release of a Request for Information (RFI) on AI agent security challenges, closing in March; stakeholders are invited to submit comments.
Draft guidance on AI agent identity and authorization is available for public comment via NIST's Information Technology Laboratory.
Plan to hold sector-specific listening sessions in April (education, healthcare, finance) to gather adoption barriers and security concerns.
Commitment from the Center for AI Standards and Innovation to act as a "front door" for industry, coordinating agency interactions and avoiding duplicate requests.
Encouragement for companies and international partners to engage with the ongoing standards initiatives and contribute to consensus documents.
Unresolved issues
Specific technical and policy solutions for AI agent security, especially around authentication, consent, and handling of personally identifiable information (PII).
How to create universally accepted metrics and benchmarks for agent performance and safety comparable to automotive crash-test standards.
The exact mechanisms for ensuring interoperability across competing commercial protocols (e.g., Anthropic's MCP vs. OpenAI's commerce protocol).
Adoption hurdles in emerging markets and developing economies, including infrastructure constraints and regulatory environments.
Potential regulatory approaches for agent-driven social media platforms and other novel use cases that were raised but not detailed.
Suggested compromises
Adopt open, voluntary standards while allowing industry to lead technical development, with government providing coordination and guidance rather than prescriptive regulation.
Balance security requirements with interoperability by developing shared identity/authorization frameworks that do not lock developers into a single vendor.
Leverage historical lessons by implementing standards that are robust yet flexible, avoiding overly rigid specifications that could hinder innovation.
Thought Provoking Comments
MCP is a universal open standard for connecting AI systems to the tools and data sources that people already use… you just need to give the model a rough description of what’s in the data source and what kind of tools or how can it access it. The model will intuitively know how it can use those data sources the same way that somebody in your enterprise would know which system to query for payroll or revenue data.
Introduces a concrete, vendor‑agnostic protocol that solves the interoperability bottleneck and explains how it lowers the cost of switching models, a core challenge for builders worldwide.
Set the technical baseline for the discussion, prompting other panelists to reference their own protocols (agent‑to‑agent, commerce) and leading the conversation toward the need for open, interchangeable standards.
Speaker: Michael Sellitto (Anthropic)
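Sellitto's "rough description" can be pictured as a small machine-readable manifest the model reads before deciding which tool to call. The sketch below is illustrative only: the `name`/`description`/`inputSchema` fields follow the general shape of MCP tool listings, but the `query_payroll` tool, its schema, and the `describe_tools` helper are hypothetical examples, not part of the actual protocol.

```python
# Illustrative sketch of an MCP-style tool description. The model sees only
# this metadata and decides when to call the tool, the same way an employee
# would know which internal system holds payroll data.
# The "query_payroll" tool and its schema are hypothetical examples.

payroll_tool = {
    "name": "query_payroll",
    "description": "Look up payroll records for an employee by ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "employee_id": {"type": "string", "description": "Internal employee ID"},
        },
        "required": ["employee_id"],
    },
}

def describe_tools(tools):
    """Render the one-line summaries a model could be shown when it
    discovers what a server offers."""
    return [f'{t["name"]}: {t["description"]}' for t in tools]

print(describe_tools([payroll_tool]))
```

The point of the design is that adding a new data source means publishing one more description like this, rather than writing a bespoke integration per model vendor.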
Our agent‑to‑agent standard is basically a digitized clipboard of information that an agent will share with another agent: ID, capabilities, intent, data requirements, security requirements. This is fundamental to greasing the wheels of the agentic economy.
Provides a clear, tangible description of how agents can communicate without bespoke code, highlighting a key technical hurdle and proposing a solution that can be widely adopted.
Expanded the scope from data‑access standards to inter‑agent communication, prompting further discussion on commerce protocols and reinforcing the theme of building a layered, interoperable ecosystem.
Speaker: Owen Lauder (Google DeepMind)
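Lauder's "digitized clipboard" can be sketched as a small structured record one agent hands to another before they cooperate. The field names below mirror the items he lists (identity, capabilities, intent, data requirements, security requirements); the `AgentCard` class, the `travel-bot-01` example, and the `can_serve` check are hypothetical illustrations, not the actual agent-to-agent wire format.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    """Illustrative 'clipboard' one agent shares with another before they
    cooperate. Field names follow the items named in the panel; the real
    agent-to-agent wire format may differ."""
    agent_id: str
    capabilities: list[str]
    intent: str
    data_requirements: list[str] = field(default_factory=list)
    security_requirements: list[str] = field(default_factory=list)

    def can_serve(self, needed_capability: str) -> bool:
        # A counterpart agent inspects the card before delegating a task.
        return needed_capability in self.capabilities

travel_agent = AgentCard(
    agent_id="travel-bot-01",
    capabilities=["flight_search", "hotel_booking"],
    intent="book_travel",
    security_requirements=["mutual_tls"],
)
print(travel_agent.can_serve("hotel_booking"))
```

Because each agent advertises a card in a common shape, agents from different vendors can discover one another's capabilities without bespoke pairwise integration code.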
Imagine a country where red means go at a stoplight… shared understanding—like traffic‑light rules—allows builders everywhere to know that what they’re building will be secure, accessible, and useful. That shared understanding grows the pie for everyone.
Uses a simple, relatable analogy to illustrate why common standards are essential for global interoperability and democratization of AI services.
Reframed the technical debate in terms of everyday experience, making the need for standards feel universal and prompting others (e.g., Sihao, Austin) to link it to historical internet standards.
Speaker: Michael Brown (OpenAI)
These open standards create a new layer—a parallel Internet—that is crucial for the development of the Internet writ large. They also raise novel regulatory questions, like whether we should regulate social‑media platforms that are agent‑driven.
Broadens the conversation from pure technical interoperability to governance, highlighting emerging policy challenges that accompany agent‑driven ecosystems.
Shifted the tone toward regulatory considerations, leading Sihao and Austin to discuss the role of government and NIST in shaping standards and security frameworks.
Speaker: Wifredo Fernandez (XAI)
The success of the World Wide Web came from open protocols like TCP/IP and HTTPS that the U.S. government helped fund. Those standards made the Internet decentralized and globally interoperable, and we need the same approach for AI.
Draws a powerful historical parallel, arguing that open, government‑backed standards were essential to past technological revolutions and should be replicated for AI.
Anchored the discussion in a policy narrative, reinforcing the call for voluntary, consensus‑based standards and influencing Austin’s description of the new Agent Standards Initiative.
Speaker: Sihao Huang (White House OSTP)
We have a Request for Information on AI‑agent security, a draft on agent identity and authorization, and we’ll hold sector‑specific listening sessions (education, healthcare, finance) to surface real‑world challenges and develop voluntary standards.
Moves the conversation from abstract ideas to concrete actions the U.S. government is taking, showing how industry input will shape forthcoming standards.
Provided a roadmap for collaboration, prompting panelists to reference their own protocols as contributions and encouraging participants to engage with the upcoming RFI.
Speaker: Austin Marin (Center for AI Standards and Innovation, Dept. of Commerce)
Just like car metrics (fuel economy, crash‑test ratings) give consumers confidence, AI standards should give us confidence in security, authentication, and the ability to audit decisions—especially when agents act on our behalf.
Uses an automobile analogy to explain why measurable, standardized security metrics are critical for trust and sovereignty in AI deployments.
Deepened the discussion on security, reinforcing Sihao’s point about SSL/HTTPS and prompting further emphasis on the need for evaluative benchmarks and metrology.
Speaker: Michael Sellitto (Anthropic)
We’ve all traveled with adapters because electrical plugs aren’t standardized globally—this is a massive hindrance to commerce. We must avoid repeating such failures in AI standards.
Provides a cautionary historical example of what happens when standards are fragmented, underscoring the urgency of global alignment.
Served as a warning that reinforced the urgency expressed by other speakers, adding a practical perspective that highlighted the economic cost of non‑standardization.
Speaker: Owen Lauder (Google DeepMind)
Overall Assessment

The discussion pivoted around the central theme of open, interoperable standards for AI agents. Early technical explanations (MCP, agent‑to‑agent, commerce protocols) established a shared vocabulary, while analogies from traffic lights to automobile metrics translated complex ideas into everyday terms, making the need for standards feel universal. Historical references to the Internet’s open protocols and failed standards (electrical plugs) framed the conversation within a broader policy and economic context, prompting the government representatives to announce concrete initiatives (RFI, sector listening sessions). Together, these comments moved the panel from abstract enthusiasm to a concrete, collaborative roadmap, aligning industry innovation with governmental facilitation and highlighting both technical and regulatory dimensions of the emerging AI ecosystem.

Follow-up Questions
How do you see the future of AI standards and agent development? And how can AI agent standards reflect the same principles that enable the open internet, including interoperability and security?
Guides the overall direction of standard‑setting efforts and ensures that new AI standards promote openness, interoperability, and security, mirroring the success of early Internet protocols.
Speaker: Sihao Huang
Should social media platforms that are agent‑driven be regulated, and if so, how?
Raises a policy gap concerning the governance of emerging agent‑driven social media, requiring legal and regulatory analysis to protect users and maintain trust.
Speaker: Wifredo Fernandez
What are the key security challenges faced by AI agents that should be addressed in forthcoming standards?
A request for information (RFI) aimed at gathering industry input on AI agent security risks, essential for shaping effective, risk‑based standards.
Speaker: Austin Marin
How should AI agent identity and authorization be defined and standardized?
Calls for comments on a draft NIST document, highlighting the need for clear identity and access‑control frameworks to ensure trustworthy agent interactions.
Speaker: Austin Marin
What specific challenges do sectors such as education, healthcare, and finance encounter when adopting AI agents, especially regarding PII handling?
Sector‑specific listening sessions are proposed to uncover practical barriers and data‑privacy concerns, informing targeted standards and best‑practice guidance.
Speaker: Austin Marin
What technical standards are needed for testing AI systems to ensure reliability and security?
Identifies a gap in robust testing methodologies, which is critical for validating agent behavior before widespread deployment.
Speaker: Owen Lauder
What metrics and evaluation methods should be developed for AI agents (e.g., performance, safety, security) analogous to automotive standards?
Suggests creating standardized measurement frameworks to build confidence among users, regulators, and purchasers of AI‑driven solutions.
Speaker: Michael Sellitto
What role should government play in convening industries to establish norms for agentic systems and data sharing?
Highlights the need for coordinated governance structures that balance industry expertise with public oversight to shape effective standards.
Speaker: Michael Brown
What lessons can be learned from failed or problematic standards (e.g., incompatible electrical plugs) to avoid similar pitfalls in AI agent standards?
Emphasizes the importance of designing universally compatible standards to prevent market fragmentation and hindered commerce.
Speaker: Owen Lauder
What are the open questions in AI measurement and evaluation science that need consensus and further research?
References ongoing discussions and a recent blog post, indicating unresolved methodological issues that must be addressed to support standard development.
Speaker: Austin Marin

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Shaping the Future AI Strategies for Jobs and Economic Development


Session at a glance: summary, keypoints, and speakers overview

Summary

The summit opened with Tejpreet S. Chopra framing AI-driven strategies for workforce and economic growth as the most critical issue for governments worldwide, noting that AI’s impact on society, industry and employment is a top concern [1-8][19-22]. He highlighted that India’s 70 million MSMEs employ 230 million people and generate 30% of GDP, underscoring the challenge of making AI affordable for such a large, resource-constrained sector [19-22][24]. Satvinder Singh described the Digital Economy Framework Agreement (DEFA) as a legally binding pact being negotiated among the 11 ASEAN countries to digitally interconnect 700 million people, with the greatest economic and job benefits expected for the least-developed members [31-40][45-48]. He added that DEFA could double the region’s digital economy to a trillion dollars by 2030 and that linking it with India would create a broader economy-to-economy digital corridor [46-48][49-52].


Dr. Mahendra Karpan explained how Guyana is using AI-enabled telemedicine and a network of 200 remote health sites equipped with Starlink to provide real-time specialist diagnoses for isolated communities, and how AI also supports primary health, agriculture and digital schooling initiatives [74-86][94-96][364-370]. Nihar Shah from Lawrence Berkeley National Lab warned that rapid data-center expansion creates hidden bottlenecks in power, cooling and water, and that addressing these infrastructure gaps is essential for sustainable AI growth [106-115][118-124]. Vinod Jhawar detailed Nextra’s plan to build gigawatt-scale, renewable-powered data-center campuses in India, noting challenges of land, high-voltage supply and talent shortages, while aiming for near-zero carbon operation [152-166][180-194].


Chopra noted India’s declining renewable energy costs, with solar tariffs falling from ₹18/kWh to ₹2/kWh and wind from ₹8.5/kWh to ₹2/kWh, positioning the country to win the “AI arms race” by offering cheap compute for its 70 million MSMEs [203-212]. Narendra Singh added that building data centers in India costs 4-6 million USD per MW versus 12 million USD elsewhere, and that domestic chip and hardware production can further reduce expenses, creating a trillion-dollar opportunity [218-224][229-230].


Satvinder Singh later argued that AI is currently reshaping mainly white-collar jobs through collaborative augmentation rather than full automation, and that governments are unlikely to allow wholesale replacement of high-skill roles, emphasizing the need for policy and ethical safeguards [248-258]. He also stressed that continuous upskilling, from schools to on-the-job training, is crucial, especially for younger workers who must develop lifelong skills to adapt to rapid AI change [303-307]. The panel converged on the importance of trustworthy AI, with the moderator highlighting the Global South’s need to co-design governance frameworks, and participants urging transparent, auditable systems and patient capital to scale responsible AI deployments [423-426][528-533]. The discussion concluded that inclusive collaboration, affordable infrastructure, and ongoing education are essential to harness AI for sustainable economic growth while safeguarding jobs and societal wellbeing [372-380][395-396].


Keypoints


Major discussion points


Redesigning workforce strategies and up-skilling for an AI-driven economy – Panelists stressed that AI will reshape jobs, especially white-collar roles, and that continuous learning and reskilling are essential. Satvinder highlighted studies showing AI’s biggest impact on white-collar jobs and warned that governments will not hand over “full replacement” of high-skill work to machines [248-260].  Vinod and Nihar pointed to looming talent bottlenecks in data-center operations and the need for rapid skill upgrades [120-124][165-199].  Satvinder later reiterated that up-skilling must become a lifelong, systemic effort, starting from schools and extending to the current workforce [303-308].


Regional digital cooperation – the Digital Economy Framework Agreement (DEFA) – The ASEAN representative explained DEFA as the world’s largest legally-binding regional digital pact, designed to inter-connect 700 million people across 11 countries and to deliver jobs and growth especially for the least-developed economies [35-48][38-44].  He argued that the agreement will double the region’s digital economy size by 2030, creating a shared platform for AI-driven trade with India [46-48][50-52].


Infrastructure bottlenecks: data-centers, energy, cooling, and the cloud-edge split – Multiple speakers identified the physical foundations of AI as a critical constraint. Nihar warned that cooling and power are “blind spots” often ignored in AI rollout plans [112-119].  Vinod described Nextra’s strategy to build gigawatt-scale, renewable-powered campuses, tackling land, voltage, and talent challenges [165-194].  Narendra added that compute costs in India are far lower than in the US or Singapore, but chip prices remain a major hurdle [223-230].  Tejpreet highlighted India’s falling renewable-energy tariffs as a competitive advantage in the “AI arms race” [203-212].


AI’s role in public health and tele-medicine, especially for remote or low-resource settings – Dr Mahendra Karpan shared Guyana’s tele-medicine network of 200 sites using satellite connectivity, enabling community health workers to obtain real-time specialist advice [84-86].  He emphasized AI-assisted diagnostics (e.g., CT-scan interpretation) while stressing that human empathy remains irreplaceable in critical care [264-270][271-274].  These examples illustrate how AI can extend specialist services to underserved populations.


Building trusted, responsible AI governance in the Global South – The later segment framed trust as a prerequisite for scaling AI. The moderator introduced the “Trusted AI at Scale” dialogue, noting that existing governance models are North-centric and must be adapted for the Global South’s realities [410-418].  Dipali Khanna described trust as “strategic infrastructure” that must be baked into transparency, auditability, and grievance mechanisms from day one [521-529].  Kip Wainscott reinforced that financial-sector trust frameworks (model-risk management, ongoing monitoring) are essential for broader societal adoption [658-669] and expressed optimism that the summit is moving the conversation from theory to actionable trust models [681-684].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be harnessed to drive inclusive economic growth and workforce transformation, especially for emerging economies. Participants examined concrete policy tools (DEFA, national AI strategies), infrastructure needs (energy, data-centers, edge vs. cloud), sectoral applications (healthcare, agriculture, climate), and the governance frameworks required to ensure AI is deployed responsibly, ethically, and at scale across the Global South.


Overall tone and its evolution


Opening (0-10 min): Energetic and forward-looking, with Tejpreet framing AI as the “most important topic” and celebrating the panel’s diversity [1-5][30].


Middle (10-40 min): Technical and problem-solving tone; speakers detailed regional agreements, infrastructure challenges, and sectoral pilots, acknowledging significant bottlenecks while maintaining optimism about cost-effective renewable energy and emerging solutions [35-48][112-119][165-194].


Later (40-80 min): Shift toward caution and responsibility; emphasis on up-skilling, human-centric design, and the need for robust trust and governance mechanisms [248-260][303-308][410-418][521-529].


Closing (80-end): Constructive and hopeful, with multiple participants summarizing key takeaways, urging collaboration, and expressing confidence that the summit has moved AI discourse from hype to actionable, trustworthy implementation [372-397][681-684][726-734].


Overall, the conversation remained collaborative and solution-oriented, moving from enthusiasm about AI’s potential to a sober assessment of the practical, ethical, and infrastructural work required to realize that potential responsibly.


Speakers


Tejpreet S. Chopra – Founder & CEO of Industry.AI; AI strategy, digital workforce, productivity-driven AI solutions [S14]


Satvinder Singh – ASEAN representative; Digital Economy Framework Agreement (DEFA), AI impact on jobs and regional digital economy [S28]


Dr. Mahendra Karpan – Interventional cardiologist & Presidential Advisor (Guyana); healthcare transformation, tele-medicine, AI in medical diagnostics [S4]


Nihar Shah – Researcher, Lawrence Berkeley National Laboratory; energy systems, data-center cooling, AI-driven infrastructure, hydrogen research [S29]


Vinod Jhawar – Senior executive, Nextra (Airtel subsidiary); large-scale data-center and AI-focused infrastructure, renewable-energy powered facilities


Narendra Singh – Managing Director, RackBank & NeveCloud; cloud compute cost, space-based data-center concepts, Indian data-center economics [S5]


Aju Widya Sari – Director of AI & Emerging Technology Ecosystems, Ministry of Communications & Digital Affairs (Indonesia); national AI roadmap, infrastructure & ethical AI guidelines [S3]



Mohamed Kinaanath – Minister of State for Homeland Security & Technology (Maldives); AI governance, national AI readiness assessment, AI Act development [S10]


Eugenio Vargas Garcia – Ambassador of Brazil (G20); AI policy, tech diplomacy, sustainability & AI-driven climate initiatives [S11][S12]


Parag Khanna – Founder & CEO, AlphaGeo; geospatial AI for sustainable urbanisation, climate-adaptation modelling, AI for public-good infrastructure [S13]


Kip Wainscott – Executive Director, Global AI Policy, JPMorgan Chase; financial-services AI risk management, model governance, trust frameworks [S16]


Dipali Khanna – Senior Vice-President & Head of Asia, Rockefeller Foundation; philanthropy for trusted AI, partnership & capital for AI deployment [S18]


Son Sokeng – Senior government official, Cambodia (also addressed as H.E. Sokeng); AI readiness, digital skills roadmap, national AI strategy & governance [S15][S25]


Audience – Various participants (e.g., Harsh Vartan, CTO HDI Industry; other unnamed attendees) – asked questions on AI, hydrogen, up-skilling, subsidies, etc.


Moderator – Session moderator (unnamed) – facilitated panel discussion and audience Q&A.


Additional speakers not listed in the provided names list


Mr. Vinod Khosla – Mentioned as a provocative commentator; not a speaking participant.


Dr. Carpin – Mis-named reference to Dr. Mahendra Karpan; no separate speaker.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above.


Mr. Khosla – Same as above


Full session report: Comprehensive analysis and detailed insights

Opening & Scope


Tejpreet S Chopra opened the session by emphasizing that AI-driven workforce and economic strategies are the summit’s top priority for governments worldwide, which are asking how AI will reshape society, industry and employment [3-8]. He highlighted India’s MSME sector – roughly 70 million firms employing about 230 million people, contributing ~30% of GDP and ~50% of exports – and framed the challenge of delivering affordable AI to these resource-constrained companies [19-22][24-25]. Chopra then set out three discussion pillars: (1) redesigning workforce strategies, (2) building digital-compute infrastructure, and (3) ensuring inclusive, responsible and sustainable AI-driven growth [26-28].


Digital-Economy Framework (DEFA) – Satvinder Singh


Satvinder Singh described DEFA as the largest regional digital agreement in the world, and the only one that is legally binding, now under negotiation among the 11 ASEAN member states and intended to create a digitally interconnected market of 700 million people [29-31]. He noted that post-COVID digital transformation accelerated growth, that ASEAN’s digital economy is roughly US$300 billion today and on track to reach a trillion dollars by 2030, and that DEFA is projected to double its size [32-34]. Singh stressed that the least-developed ASEAN economies (Laos, Cambodia, Myanmar, Timor-Leste) stand to gain the most in jobs and per-capita economic growth [35-37].
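As a rough check on the growth figures above: moving from US$300 billion to a trillion dollars by 2030 implies a compound annual growth rate of roughly 22%. This is a back-of-envelope sketch only; the six-year horizon (a 2024 baseline) is an assumption for illustration, not a figure from the session.

```python
def implied_cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1 / years) - 1

# US$300 bn -> US$1 tn over an assumed six-year window (2024 -> 2030)
cagr = implied_cagr(300e9, 1e12, 6)
print(f"Implied CAGR: {cagr:.1%}")  # roughly 22% per year
```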


Healthcare & Telemedicine – Dr Mahendra Karpan


Dr Mahendra Karpan (presidential advisor, Guyana) explained that offshore-oil revenues are being allocated to health, agriculture, digital public services and carbon-credit programmes [38-40]. He detailed Guyana’s telemedicine network of over 200 remote sites equipped with Starlink connectivity, enabling community health workers to transmit EKGs, X-rays and vital signs to specialists in real time [41-44]. AI is applied for primary-care diagnostics, inventory management, disease surveillance and agricultural soil management [45-47]. Karpan invited investors, emphasizing the need for long-term, sustainable AI-driven development [48-50].


Energy, Cooling & Data-center Bottlenecks – Nihar Shah


Nihar Shah (Lawrence Berkeley National Lab) warned that AI’s rapid expansion will strain power, cooling and water resources, calling cooling and water-use “blind spots” in AI-infrastructure planning [51-53]. He cited an Energy Act-mandated report showing data-center capacity has tripled over the last decade and is expected to triple again by 2028 [54-56]. Shah highlighted the PAC-Silica U.S.-India AI supply-chain partnership and the need for talent pipelines to support AI hardware and software [57-59]. He gave an example of AI-designed chips delivering a 30% performance gain, illustrating AI’s potential to optimise the whole stack [60-62].


Data-center Infrastructure & Renewable Energy – Vinod Jhawar


Vinod Jhawar (Nextra, an Airtel subsidiary) outlined Nextra’s new “AI-VC” vertical focused on building gigawatt-scale AI-ready data-center campuses across India [63-65]. He identified key challenges: power availability, land acquisition, high-skill talent, and the need for ultra-high-voltage (700 kV) grid connections [66-68]. Nextra is sourcing ~400 MW of renewable energy and aims for net-zero operation by 2032, leveraging favourable Indian renewable-energy policies [69-71]. Jhawar stressed the urgency of up-skilling engineers through rapid training programmes [72-74].


Cost of Compute & Edge vs. Cloud – Narendra Singh


Narendra Singh (MD, RackBank/NeveCloud) compared data-center CAPEX: $4-6M per MW in India versus $12M per MW in the US, Singapore and Dubai, attributing the cost advantage to domestic supply-chain manufacturing [75-77]. He warned that chip costs remain 5-10× higher than infrastructure costs and that a “trillion-dollar” AI opportunity hinges on cheaper compute and indigenous AI chips [78-80]. Singh also referenced a future space-based data-center mission, in partnership with Agni Cool, to support critical workloads and border-security applications [81-83].
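The CAPEX figures Singh quoted can be put side by side with a quick calculation: at the midpoint of the Indian range, a campus costs a bit under half of what the same capacity would cost overseas. The 100 MW campus size here is an illustrative assumption, not a figure from the session.

```python
def campus_capex_usd(megawatts: float, cost_per_mw_usd: float) -> float:
    """Total build cost for a data-center campus of the given capacity."""
    return megawatts * cost_per_mw_usd

MW = 100  # illustrative campus size (assumption)
india = campus_capex_usd(MW, 5e6)      # midpoint of the $4-6M/MW range
overseas = campus_capex_usd(MW, 12e6)  # quoted US / Singapore / Dubai rate
print(f"India: ${india/1e6:.0f}M, overseas: ${overseas/1e6:.0f}M, "
      f"saving {1 - india/overseas:.0%}")  # saving ~58%
```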


Workforce Impact & Upskilling – Satvinder Singh & Panel


Singh cited an Anthropic study showing AI will affect white-collar jobs more than blue-collar, with most impact coming from collaborative augmentation rather than full automation [84-86]. He warned that governments are likely to regulate AI-driven job displacement and that continuous learning will become the norm [87-89]. Audience questions and panel responses highlighted the need for large-scale up-skilling programmes, school-level digital curricula and lifelong-learning pathways [90-93].


Policy, Subsidies & Trust – Chopra, Singh, Khanna, Wainscott


Chopra noted the Indian AI Mission’s INR 10,300 crore fund, GPU subsidies at INR 65 per hour, and other incentives aimed at democratizing AI access [94-96]. Singh added that the government’s INR 200 billion data-center fund excludes chip costs, reinforcing the need for private-sector investment [97-99]. Dipali Khanna (Rockefeller Foundation) stressed that trusted AI must be built into systems from day one and that patient capital can support regulatory sandboxes and capacity-building in the Global South [100-102]. Kip Wainscott (JPMorgan Chase) described the financial sector’s model-risk-management framework as a template for AI trust and called for industry-wide standards to accelerate responsible AI deployment [103-105].


Key Takeaways


The panel converged on six core messages: (1) AI will reshape jobs but collaboration, not wholesale displacement, is the realistic path; (2) affordable compute and renewable energy are critical infrastructure bottlenecks; (3) tele-medicine and AI-enabled primary health care can leapfrog resource-constrained health systems; (4) up-skilling and continuous learning are essential for all workers; (5) public-private partnerships, subsidies and sovereign AI strategies (e.g., DEFA, AI-Readiness assessments) are needed to ensure inclusive growth; (6) trust-by-design and transparent governance are prerequisites for scaling AI in the Global South [106-110].


Session transcript: Complete transcript of the session
Tejpreet S Chopra

Hi, good morning everybody. I’ve got an incredible panel here this morning. The topic that we have is, I think, the most important topic at the summit. I think everywhere I’ve spoken or everywhere I’ve been, it all revolves around this critical topic around AI-driven strategies for workforce and economic growth. And I think the reason this topic is super important is the fact that if you are a government official anywhere in the world, I think this is their biggest concern, is that how is AI going to impact society? How is AI going to impact workforce? How is AI going to impact industries? So that’s going to be the most, you know, it’s the most important topic.

So I appreciate everybody who’s out here. My name is Tejpreet S Chopra, and I’m the founder and CEO of Industry.AI. So we are an AI company that focuses on driving productivity. This is a passion for me because I live and breathe this every day, because what we are really trying to do is figure out how do you create the digital workforce, or how do you empower the workforce across industries. A quick introduction of my colleagues on the call. We have Mr. Satvinder Singh from ASEAN. Mr. Narendra Singh from Neve Cloud should be joining us any minute. Mr. Vinod Jhawar, who is really the key in Nextra, which is part of Airtel. Dr. Nihar Shah, who is one of the best in the healthcare space, and Dr.

Mahendra Karpan who is a presidential advisor to Guyana. Welcome everybody. Just one other key point. Just to put it in context. In India we have 70 million MSMEs. These MSMEs employ 230 million people. The MSME market in India produces 30% of India’s GDP and 50% of exports. The other big critical thing is how do we bring AI for all, how do we bring AI for all these companies that can’t afford what normally large companies do? And that’s the big challenge in front of us. And that’s what we’re going to talk about today. So in order to really kick this off, what I’d like to do is first really talk about three critical elements in today’s discussion, is that how do we redesign our workforce strategies given this new technology that’s coming up?

How do we build the digital and compute infrastructure? And I’ll request Vinod to talk about that. And how do we really ensure that economic growth driven by AI remains inclusive, responsible, and sustainable? So with that, I’m going to request Satvinder, if you don’t mind kicking it off, and it would be good to understand from your perspective, how is the Digital Economy Framework Agreement going to help governments around the world navigate the opportunities that exist? Over to you. Thank you.

Satvinder Singh

Thank you, Mr. Chopra. Very good afternoon to all of you. Great to be here with all of you. I think all of us are enjoying this momentous impact event, and it’s a great place to be here sharing ideas. And I’m here specifically, Mr. Chopra, if you don’t mind, to give the perspective of ASEAN. Some of you may not know ASEAN is next door to India. Today we are the 5th largest economic bloc, 700 million people, most of it middle and upper-middle-income economies who are part of ASEAN. And with India, of course, we are deeply connected: we have a free trade agreement, and we also have very strong trade and economic ties with India, and we have a lot of cooperation going on with India, including in the area of digital connectivity. Mr.

Chopra talked about the digital economy framework agreement, so let me just update you on what that is. In short, we call it DEFA. DEFA is a digital agreement we are now negotiating; we are in the midst of completing negotiations by March of this year, after negotiating for the last two years. It is the largest regional digital agreement in the world; the only difference is it is also legally binding. So we are actually negotiating with the 11 countries of ASEAN to come on board, 700 million people, to be digitally interconnected and interoperable, so that we can do business better and so that we can grow our economies better. Now, the essence of DEFA came about post-COVID. I think in COVID, like in India, in ASEAN too, it really changed us. While COVID was not good for anyone, COVID also had a positive unintended consequence: we saw the greatest transformation taking place in the way we live, work and play. And that translated clearly into growth that took place post-COVID. I think the leaders in my region saw the prospects, and they also saw the numbers, where a huge chunk of economic growth is driven by digital. And therefore there is a no-regret move now to move the entire region into digital connectivity, and that’s where DEFA comes through. I think the interesting thing in ASEAN is that, well, India is one country; in ASEAN, like I said, there are 11 countries.

We have LDCs there, least developed countries like Laos, Cambodia, Myanmar, and now Timor-Leste just joined us. And then we have advanced economies like Singapore, Malaysia, Thailand, Indonesia, who is like a middle-income economy. So it’s a mixed bag of economies, but the momentum of getting all of them together to do this, we were able to move the agenda because we were able to show very quickly through data that the biggest beneficiaries actually of the DEFA is not even advanced economies like Singapore, because they are already there, digitally connected, but actually are the LDCs. We were able to show that the impact of DEFA will be greatest in terms of jobs, prospects, economic growth, because they are really economies which are least developed, but they are going to be more developed.

We are moving into the latest of all kinds of connectivity at the lowest cost. And they will be the ones who will be able to benefit on a per capita basis in a maximum way. So that is how we were able to get 700 million people from 11 countries sitting on a common agenda of being integrated, because we were able to show them the money. If you don’t show them the money, nobody is going to jump in and do any such agreement. Money here means jobs, economic growth, deeper depth in terms of growth of the people and communities. So DEFA, for example: already ASEAN is a very vibrant digital economy. Roughly it’s about 300 billion today, and we are going to be moving to a trillion dollars in size by 2030, in the next couple of years.

But with DEFA, the numbers are showing that the region is going to double the size of the digital economy. So I think this is where we are in terms of our ability to come together to be able to do business. And our idea is, of course, not to stop in ASEAN. The idea is that once DEFA is going to be in place, we want to be connecting to India. Economy to economy, I think this will really be fantastic. I think we can stop looking over the shoulder. I mean, basically global South India, Southeast Asia, there’s plenty of markets, demographics on our side. In fact, even the affinity of our people in wanting to embrace technology is on our side.

In fact, some of the studies are showing that it’s actually economies like Southeast Asia and ASEAN, as well as India, where people are seeing the translation of the use of AI in the most profitable way. The data is showing that it’s not in the West. It’s actually in our region where businesses are beginning to deploy small AI into their day-to-day business and making a big impact on productivity, on growth, and also on relevance. I’ll stop there. Maybe

Tejpreet S Chopra

Satvinder, you’re absolutely right, because I think in our part of the world, I tell everybody that India is trying to lead and be the bridge between the advanced economies and the emerging economies. But I think the dynamics of technologies needed for our part of the world is very, very different from the West. So I think if we can get a good price point to provide these technologies, that will be great. Dr. Karpan, you’re an interventional cardiologist and you’re advising a lot of governments around the world. It would be great to get your perspective in terms of how you see AI and its impact in terms of transforming public health care. Now, Dr. Karpan, I was at a discussion two days ago with Vinod Khosla, and he always says some really provocative things. One of the things he said was that a few years from now, with AI, we won’t need doctors in the world, and I know he said we won’t even need surgeons in the world. You’re the actual real person who does all this stuff, so it would be good to get your perspective.

Dr. Mahendra Karpan

Thank you very much for having me here, and I bring you greetings from our President, Dr. Mohamed Irfaan Ali, and Vice President, Dr. Bharrat Jagdeo. We are from a small country located in South America with just a population of 850,000 people. I believe on the way here I might have encountered 850,000 people on the road. So you can imagine the scale that we are dealing with. What is transformative about Guyana at this time in our history is that sometime in 2015 we discovered oil offshore. And you know everything that comes with that transformative discovery. The oil and gas industry is now booming. And we are trying to learn all of the lessons from states that have walked this path before.

Those that have relied heavily only on oil and gas have encountered tremendous difficulties that we are hoping to avoid. So one of the things, or a couple of the things that we are using these resources to do is to help us with our health care, our agriculture, and our digital transformation in the public service. One of the other important things about Guyana is that we have the majority of our population living on the coastal area, and most of the rest of the country is forest. So we actually are pioneers in selling carbon credits to the world. The sad and vulnerable part of this, however, is that the coastal area is on the sea level.

So using AI, predictive models, all of those things, it’s a survival tool for us, not just now, but for future generations. In recent times, we have been fortunate to have visionary leadership to take us in the direction we’d like to go. And we have in our country several remote villages of indigenous populations, and I’ll share an example, a personal example, from when I was more in hospital practice. There was an 18-year-old boy who had to be flown out from an interior village by the military helicopter after a snake bite, and when he came to the city it was the first time that he saw headlights on a car. That’s where we still are in some places.

So we have been able in recent times to use our resources to establish telemedicine in particular areas. We have now over 200 functional sites that can actually serve these remote communities. They can do simple things like EKGs, x-rays, blood pressure, blood sugar, all of the common things, and respond to trauma, etc. So a healthcare worker, not necessarily a doctor, a community health worker, somebody indigenous to that area, can assess these patients. They go on video conferencing, and all 200 of these locations actually have Starlink, or we’re trying to implement that now, so that they have connectivity. So the specialists on the coast and in larger centres actually can give real-time diagnosis, real-time advice.

I myself, in the cardiac unit, and my on-call team are always able to review an EKG. Like in India, I suppose, our number one cause of mortality is still cardiovascular disease. Heart attack is a huge problem in our population. Historically, some of you may be familiar with the fact that most of our population at one time were indentured immigrants who left India, and the majority never returned. They built homes and created generations of descendants in Guyana, Trinidad and Tobago, Suriname, in that entire region. So whatever is plaguing you in India from a healthcare perspective is the same thing that was transferred, because we maintain the same lifestyle; we have the same foods, the same likes, the same dislikes, the same genetic predispositions.

So in our context, what we have been doing is to start at the basic primary level, because that’s where we are. We’re not yet at the level of Singapore and the others, but we’re hoping to get there with a very rapid, leapfrog type of strategy. So we’re using AI at this time for primary healthcare, for inventory management, and for surveillance. And we’re moving into areas like agriculture, for soil management, food production, et cetera, to help us in all aspects. So for those of you who are looking for opportunities: where there are challenges, there are always opportunities. Guyana presents to you tremendous opportunity for investment, for development, and for long-term, multi-generational, sustainable involvement in our country.

And I am sure, and I bring you this message on behalf of the president, we welcome investors to Guyana.

Tejpreet S Chopra

Dr. Karpan, thanks so much for that. I completely agree, because I think the world is going to face exactly the same challenges, whether it’s in health care or in agriculture, and I think there’s a lot of cross-sharing that we can actually learn from. Nihar, I’m going to pull you in. Just for everybody’s benefit, Nihar is with Lawrence Berkeley National Lab, which is really one of the leading public research institutes in the world. But Nihar, I just want to share that about four or five weeks ago there was a majlis at ADNOC in Abu Dhabi, and they had a hundred CEOs in a room on a Sunday; everybody in the world showed up. There were four groups of people: the CEOs of every major oil and gas company in the world, the CEOs of every major energy utility in the world, the CEOs of every AI company in the world, and the CEOs of every large capital provider in the world. I was trying to figure out myself what the connect was, and at the end of the day what came out was that the world will need four times more energy in the next 10-12 years to support the growth of data centers and all the other things that are going to happen, and that’s going to require four trillion dollars every year for the next 10 years. So, Nihar, with those numbers, we would love to get your perspective from a technology standpoint: how should the world react to that kind of growth that’s needed? Thank you.

Nihar Shah

Yeah, so as mentioned, my name is Nihar Shah and I work at Lawrence Berkeley National Lab; it’s one of the 17 Department of Energy national labs. If you’ve seen Oppenheimer, you might know where the national labs came from. And, of course, we have a very distinguished history with a lot of Nobel Prizes, so I won’t bore you with all that. But I’m very grateful, first of all, to CII for this opportunity to speak. And with respect to the question, obviously energy is one of the things that, being at one of the energy labs of the United States, I go to bed every night and wake up every morning thinking about.

Now, one of the other things that is probably not as well known is that I direct the global cooling program at Berkeley Lab. And there is another blind spot: you mentioned energy, you mentioned the huge growth, you mentioned the huge investment needed. Another thing that’s going to be needed is cooling, and that’s a blind spot that I think we don’t really pay attention to. So for that gathering of CEOs, I hope that there is also a gathering of HVAC and data center CEOs. In addition, I think one more thing that we would probably need to think through in countries like India is water consumption. So we need to really think about this in a holistic sense. And in the bigger picture, with AI we are at the intersection of so many different things; but if somebody tells you that they know exactly what’s going to happen in three years or five years or seven years, they’re selling you something, so you might want to take a second look at that.

I’ll say a couple of other things related to what you mentioned. Just a month ago I was in Silicon Valley; Berkeley Lab is based in Silicon Valley, and Vinod Khosla was giving a keynote there. As usual, he said some very provocative things. One of the things he said was that by 2030, everything that needs human expertise will be free or nearly free. The second thing he said, and this comes to our topic here, was that everything that needs labor is also going to be very nearly free. The thing that I disagree with Mr. Khosla about is, again, the energy blind spot and the cooling blind spot that I mentioned.

So really, I think some of these things are going to be infrastructure bottlenecks, which I think some of our co-panelists are going to be able to address. And along with that, there’s probably also going to be a talent bottleneck. When I say talent bottleneck, I don’t mean talent across the board; I think it’s going to be particular kinds of talent that we’re going to need. And just today, you might have heard, the U.S. and India signed, or rather India formally joined, this PAC-Silica initiative by the United States. PAC-Silica is an initiative about the whole AI supply chain. So now you’re talking not just about compute and not just about the infrastructure, but about the whole supply chain that will allow that to happen.

And that’s an initiative that the U.S. government has started. So there’s a range of different things we could talk about. The workforce dimension, of course, is super important, and I can come back to any of those things. I’ll mention one last thing: the Energy Act of 2020 requires Berkeley Lab to report to the U.S. Congress on data center growth, and they found that over the last decade, data center energy use has tripled. So some of these numbers bear out even if you look at history. And the forecast is that by 2028, it could triple again. These things are, again, not very well known, but I do think that these blind spots need to be addressed by all of us.

And we’re at a very interesting point with, I would say, this industrial revolution. So let’s see what happens.

Tejpreet S Chopra

Thanks for that, Nihar. I think you’re absolutely right; I think people are underestimating the challenges of developing all this infrastructure, whether it’s in terms of cooling, power, or communication and fiber optics. So I think that’s going to be a huge challenge. And with that, I’m going to turn it over to Vinod. Vinod is with Nxtra; they’re building some of the largest data center networks in India. But just before this panel, I was actually talking to Vinod, because I think there are going to be two parts of the world. There are going to be large cloud data centers, which Vinod is building, but I also think there’s going to be another parallel world that’s going to be on the edge.

We at Industry.ai this week launched the world’s first AI supercomputer for manufacturing, which can go on every factory floor of the world, especially at a price point for 70 million MSMEs to transform productivity. So two things, Vinod. One, it would be good to get your perspective on how the world is going to pan out, cloud versus edge. And two, on all the challenges or bottlenecks that Nihar was talking about, whether in terms of cooling, capital, technology, or skilled labor, it would be good to get your perspective.

Vinod Jhawar

Sure. Thank you. Thank you very much. I represent Nxtra; it’s a subsidiary of Airtel. We are in the business of building infrastructure for data centers; that’s our bread and butter at Nxtra. We’ve been doing this for the last 20 years, and we’ve seen the evolution from normal server room racks, to small enterprise customers, to now hyperscalers. Now, we’ve got the expert also over here, and we’ve got the new, what do you say, elephant in the room: the AI requirement for data centers. So much demand is now coming through to build large data center infrastructure that Nxtra has decided to carve out a separate vertical, called AI VC, for that.

So that’s the vertical which I represent. We are here to develop large-scale, gigawatt-kind campuses to cater to the fast-growing requirements of some of our customers, primarily in the Indian subcontinent. Yeah, it is rightly said that the challenges are there: power is a challenge, land is a challenge, and getting the right kind of skill set still remains a challenge. We come from 20 years of experience, so we have understood the ways to work on this. Being one of the pioneers in the homegrown data center industry here, we have been doing that. A few of the challenges have been pushing us to go beyond certain areas and look at new areas to build data centers.

Some of them are very close to the coastal area so that they can also accommodate cable landing stations for us, and that takes care of a lot of data requirements. Plus, we are also putting sites close to national grids now. There were places where we used to source power at 33 kV; now we are looking at 700 kV and above. So this is the thinking which has now evolved, and it requires a separate thought process. Obviously, a large amount of capital is required to get into that. So we are in this, and we are well prepared. The demand obviously is quite high, and Nxtra’s expansion is quite aggressive as well.

We have something going on in the south as well, which we think is the best way forward. On your second question, on the power and sustainability portion of it: obviously, most of the power we are going to source from renewable energy. That’s a key strength in the Indian regions here. Luckily, thanks to some of the good policies put in place a few decades back by the government, we have plenty of renewable energy generators here. The government is also pushing to upgrade the infrastructure to evacuate this energy. So once we are at this high voltage, we are also connected to the central grid, and that makes it very, very reliable for us.

We are aiming to be net zero by 2030 or 2032, somewhere in that range, and there is a big pool of renewable energy for us to tap into. At present, we have contracted close to 400-odd megawatts of renewable energy. So we are no longer looking at sourcing just 50% of our energy that way; that percentage is now going to almost 100%. This is how the whole sustainability portion of the data center works, and India and Nxtra are well positioned to tap into it. I think that’s how the interest in doing green data centers evolves for us. The other challenge which was mentioned is the skill set. Yes, the skill set is a challenge.

It probably requires a lot of debate. It’s something which needs to be handled at the fundamental level, at school and at university, and right at the immediate level of training existing engineers to adapt, because if we wait for the next generation to come in, we would probably have missed the bus. So there are three or four approaches we are looking at for how we can deliver an immediate kind of skill upgrade to make people suitable to develop data centers. These are some of the things which we are doing, and I think, as Nxtra, we are very, very well positioned to meet the demands of whatever the customer is looking at.

Tejpreet S Chopra

Thanks for that, Vinod. I think one of the things that came out of that session in Abu Dhabi was that the country that’s going to win the AI arms race is the country that has the cheapest energy. And I really do believe that in India we have an incredible opportunity. I come from the renewables space. At my first solar farm eight years ago, my revenue was 18 rupees a kilowatt-hour; today we get 2 rupees 20. My first wind farm was 8 rupees 50; today we are at 2 rupees again. So I think we have an incredible opportunity in India to really win this AI arms race, because the cost of producing energy is quite cheap. So, Narendra, first of all, welcome.

Narendra is the MD of RackBank and NeevCloud. Narendra, you’ve heard all the challenges in terms of the cloud. It would be good to get your perspective on two things: one, the cost of compute in India, and how you really make it affordable for everybody; and two, how you ensure adoption across the country.

Narendra Singh

…kilometers away from Earth. We partner with Agni Cool, which is a space tech company, and the space ecosystem in the country has evolved over the last seven years; the government has opened up space for everyone, for private players. The first mission we are sending up before the end of this year, and we believe this is for critical workloads which can protect the borders, unmanned vehicles, and all those things. So we started exploring beyond Earth, and that’s what is needed. And India can lead because of the ecosystem. Today, look at the cost of building a data center in India: 4, 5, 6 million dollars per megawatt versus 12 million dollars in the US, in Singapore and Dubai.

In any market you go to, you get a cost of 12 million dollars. Why are we cheaper? Because 80 to 90 percent of the products required in the supply chain are manufactured in India, and we have to strengthen that. As for the 200 billion the government announced, I believe this is only for data center infrastructure, not for chips; the chip cost is on top of it, like 5x or 10x, depending on what chips you are using. So the opportunity is huge. It’s a trillion-dollar opportunity for the country. Thank you.
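The cost comparison above can be sketched as a quick back-of-envelope calculation. The per-megawatt figures are the speaker's rough numbers, and the 1 GW campus size is an assumed illustration, not a quoted project:

```python
# Back-of-envelope data-centre build-cost comparison, using the
# per-megawatt figures quoted on the panel (illustrative only).

def campus_cost_usd(capacity_mw: float, cost_per_mw_musd: float) -> float:
    """Total build cost, in millions of USD, for a campus of the given capacity."""
    return capacity_mw * cost_per_mw_musd

# As quoted: roughly $5M/MW in India vs ~$12M/MW in the US, Singapore or Dubai.
india = campus_cost_usd(1000, 5)    # a hypothetical 1 GW campus in India
abroad = campus_cost_usd(1000, 12)  # the same campus abroad

print(f"India:  ${india:,.0f}M")   # India:  $5,000M
print(f"Abroad: ${abroad:,.0f}M")  # Abroad: $12,000M
print(f"Saving: {1 - india / abroad:.0%}")  # Saving: 58%
```

On these assumed numbers, the gigawatt-scale campus the panel discusses would cost well under half as much to build in India as in the other markets mentioned.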

Tejpreet S Chopra

I met Narendra around 2021 at the JW Marriott Hotel in Mumbai, and at that time he was still putting together this whole strategy. I was thinking to myself in 2021, what’s going to happen with data centres? And now he’s talking about data centres in space, so it’s good to see the kind of progress that’s happening. Before I go on, I have lots of questions to ask, but are there any questions from the audience? Go ahead.

Audience

Sir, I am Harsh Vardhan, from the HDI industry, but before that I was working as a research fellow at CSIR. So my question to Mr. Shah is: we have seen hydrogen fuel cells being used at an experimental level in railways and buses, but they have not been implemented at a large scale, neither in India nor abroad. So what are the… Thanks.

Nihar Shah

Yeah, I have many colleagues at Berkeley Lab who have actually been collaborating with India’s National Hydrogen Mission. So thanks for the question. When it comes to fuel cells, I think there are a few bottlenecks. Some of them have to do with even just having the hydrogen infrastructure in the country. I’m not necessarily the right person to address all of these issues in terms of why hydrogen fuel cells have not taken off, but I do think that some of these things are still an R&D challenge, and many governments are looking at hydrogen to see whether you can eventually do the R&D to deploy it.

And there is collaboration going on. So stay tuned. Obviously India is also doing a lot on that, as are other countries.

Narendra Singh

I can add to this. The bottleneck, as an operator, is that the cost should not be higher than what we are getting today from the grid. Innovation should lower the cost; then adoption will happen rapidly. That’s what we think, and that’s why I believe adoption is not happening: people don’t want to pay a premium for it. And I think India can take the lead on that in terms of cost, adoption and price points. The supercomputer for manufacturing, we are seeing at 6.5 lakhs. So I think that’s the kind of speed at which we are going to change the way things are going.

Tejpreet S Chopra

So, Satvinder, I want to pull you in on the ultimate question that everybody is asking: the impact on jobs in ASEAN. How are you thinking about it? Because the huge concern for governments is whether AI is going to replace jobs or enhance jobs. So it would be good to get your perspective, and what you say out here is going to drive policy all over the world.

Satvinder Singh

So I am going to speak from the perspective of how data is already being collected on the impact on jobs, and actually I have taken this from the studies done by Anthropic. It’s a massive study on AI, jobs and security, and one thing is clear: while there’s quite a major hype around AI right now, when you actually study the impact that it has globally, and even in Southeast Asia, in ASEAN, it’s really impacting certain segments of the economy. I think the biggest impact is actually more on white-collar jobs rather than blue-collar jobs. And even in the white-collar jobs, a lot of it has to do with collaborative augmentation rather than full automation and handing everything over to the AI.

So I think that’s where we are at this stage, in terms of the AI technologies that we have and how we’re deploying them. Of course, when you watch Elon Musk and all these technologies and what’s to come in two or three years’ time, they are saying this is going to move from collaboration to totally replacing the human factor. In fact, the takeover part is, I think, what scares most societies. And I must tell you, this is now becoming front and centre of conversations in government, among policymakers. I’m actually quite certain that governments are not going to hand over to the machine this ability to replace all the important jobs at the high echelons of society.

That, I can assure you, is not going to happen. There will be a lot of effort and conversations going on, and it’s happening behind closed doors, where policy will have to come in to determine what can or cannot happen. Those barometers are going to be there, and I think that’s where you see this impact. Even here, you saw the largest conglomerate of decision makers from the private sector sitting with governments in this one location; you can see that the momentum is there. There will have to be an ability for us to differentiate and also collaborate with the change that is going to come. Otherwise, I think you’re going to see societies breaking up; the contract of governments with their people is going to break up; people won’t have jobs. And if we are saying that the impact is not so much on blue-collar work, then a lot of the farming community is probably sighing with relief, but we all know that in the cities, where there are millions of people, this is where it’s going to be quite critical for us to get this contract properly sorted out. I think in the coming years you’re going to see a lot of ethical rules and regulations set up to ensure that whatever change we embrace from the latest AI, it has to improve, not take away, the quality of life. Collaboration is going to be the name of the game, not displacement of people. That is something in conversation; it’s not something that we in this room can decide, but clearly you can see that the momentum is here for those kinds of difficult conversations to take place.

Tejpreet S Chopra

You’re right, and I like what you just said. First of all, I’m hearing collaboration, not displacement; the word I’ve been using is enhancement, not displacement. But I think it’s going to be all of the above, so I think you’re absolutely right. Dr. Karpan, it would be good to get your perspective, especially on healthcare. You talked about telemedicine; technically, I guess a doctor in the United States or India could be providing advice to somebody sitting in Guyana. So how are you seeing this whole world panning out in terms of the impact on healthcare jobs?

Dr. Mahendra Karpan

Thank you. I’m glad that you mentioned the collaboration of these services. Yes, indeed, in the telemedicine space we have doctors from India and doctors from New York; the Apollo hospitals here and the Northwell group in New York collaborate, and they are able to help us with patients. In terms of displacement of human capital or human skill sets, though, I think for most countries like ours, we’re starting out at a severe deficit. There is not a surplus of radiologists. There’s not a surplus of cancer diagnostic technicians. All of these skill sets are extremely limited. So AI actually comes in to help us with diagnosis, accuracy, speed of diagnosis, as well as the economic aspect of achieving all of those outcomes.

But I tell you one thing, as a physician: there are some things we are not too concerned about. In the emergency room, when there’s a child who can’t breathe from asthma, with scared parents, an AI can make an accurate diagnosis. It can tell you exactly what to give, what mixtures to nebulize. But comforting and reassuring those parents, that’s a human function. At difficult stages of life, when you’re facing terminal situations, end-stage cancer, you want somebody with warmth to hold your hand, and that cannot and can never be replaced by AI. So in all of this we have to bear in mind the complementary aspect of this new era that we are entering. We rely on the AI to give us accuracy of diagnosis; in fact, in Guyana we just purchased software to help us with CT scan interpretations, and the world is going towards more imaging and earlier diagnosis. That will be used effectively to reduce cost and to have better access to specialists. There was a time when we could not even contemplate getting the right treatment, and we were just waiting for the top people from Apollo, or the top people from Mount Sinai, to give us an opinion.

Now they’re willing and able to, despite the time difference. Actually, it’s about quarter to five in the morning my time, so if I’m a little sleepy, please forgive me. But this is how we’re using it. That human touch, though, I don’t believe will be replaced at all.

Tejpreet S Chopra

Glad to hear that. And also, I sometimes think that when you go and search something on the internet, you get hallucinations, you get false answers; the last thing I want is a doctor searching, getting the wrong answer, and suggesting the wrong medicine. Any other questions? Otherwise, go ahead.

Audience

Hello, I’m the CTO at MindEquity.ai and I’m also the founder of AI Society. I have two questions, actually. My first question is: if I am starting an AI company and want to have a full-scale impact, what are the biggest challenges and technical barriers?

Tejpreet S Chopra

Do you want to take that? Do you want to go ahead?

Narendra Singh

So, scale and AI. Today you spend $2 and you generate $1, because half of it, 50%, goes to the AI chip company. This problem can only be solved by enabling indigenous AI chips which have better performance and lower cost; and maybe the big guys who are enabling the entire ecosystem have to reduce the cost. That’s the best way, because once you build the agents, we have a billion users; it’s the largest market in the world. When you scale, people will not pay for the value: a $20 or a $10 subscription is not enough when your cost is higher. So I think this will take some time; new chips are coming. And on the job question related to that, if I may answer it:

On jobs: AI voice usage is already happening widely in the country, and we are losing jobs. Even the government is adopting AI; they are signing MoUs with foundation model companies. What I believe is that they should not remove the call centers. They should come up with a policy, because AI is costing 7 rupees per call versus a call center at only 1 rupee per call. So you adopt AI here and you are firing millions of jobs, and what happens after that? I think in some areas the government has to restrict AI. Those are the challenges, because we have to wait some time to figure out what these people are going to do next, and upskill them.
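The unit economics quoted in this answer ("spend $2 and generate $1, with half going to the chip company") can be put into a tiny back-of-envelope sketch. All numbers are the speaker's illustrative figures, not real pricing data:

```python
# Rough AI inference unit economics, sketched from the panel's numbers.

def margin(revenue: float, cost: float) -> float:
    """Profit margin as a fraction of revenue (negative means a loss)."""
    return (revenue - cost) / revenue

revenue_per_unit = 1.0   # dollars earned per unit of usage, as quoted
total_cost = 2.0         # dollars spent to serve that unit, as quoted
chip_share = 0.5         # fraction of the cost going to the AI chip vendor

chip_cost = total_cost * chip_share  # $1.00 of every $2.00 spent
print(f"margin today: {margin(revenue_per_unit, total_cost):.0%}")  # -100%

# If cheaper indigenous chips halved the chip bill, cost drops to $1.50:
cheaper_cost = total_cost - chip_cost / 2
print(f"margin with cheaper chips: {margin(revenue_per_unit, cheaper_cost):.0%}")  # -50%
```

Even halving the assumed chip bill leaves the illustrative business loss-making, which matches the speaker's point that both cheaper chips and higher willingness to pay are needed before the economics close.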

Tejpreet S Chopra

Thank you. Can I give somebody else a chance? Thanks so much. Somebody at the back. Sorry. Just give me one minute. Go ahead.

Audience

Question to Mr. Satvinder. As you were on the topic of job insecurity: how can strategies like upskilling or reskilling help preserve jobs and keep the human in the loop, and, in the end, make the relationship between the two better and more human-friendly?

Satvinder Singh

So clearly the effort in most countries of the world is to really start upskilling their populations. It’s really beginning; it’s starting from schools, but it’s going out to the workforce, because it is the workforce that is today actively under siege with all this AI implementation. And obviously there are countries who can afford the upskilling; the more developed countries are quite generous in terms of capacity building and coming out with programs, even empowering employers and workers to help themselves to do the upskilling. But I’m also sometimes worried: what are they upskilling with? What they upskill with today may not be enough in two years’ time. So I think this upskilling is going to be a real uphill task for all of us. Ultimately, upskilling is the word, but continuous learning to adapt is going to be the name of the game. It’s going to be harder for my generation, for some of us on the panel (not all of you) and for some of us in the audience, but not for some of the younger ones who are just starting work right now. When I talk to some of the younger people, they are less worried about this, because they have already grown up in a universe where things move at that speed. They are not talking about lifelong careers; they are talking about lifelong skills that they will keep adapting to the new change. So I think that is the name of the game: to survive.

Tejpreet S Chopra

go ahead

Audience

Basically, you told us that the solar revolution came in India when the tariff was 18 rupees per kilowatt-hour, and now it’s around 2 rupees 20. There were little catalysts involved to boost the solar revolution in India; one of them was subsidy, and information to the people. Basically, I am in favor of AI coming in and being boosted, because it won’t hurt jobs; it will fit people at the particular level where they should be, and it will increase literacy. So are we also planning to give subsidies on various AI projects which are in development, because that would act as a catalyst to boost AI the way the solar revolution came in India?

Tejpreet S Chopra

I think the government is doing a lot already. The government has already given 10,300 crores for the IndiaAI Mission, for sovereign AI. They are making GPUs available at 65 rupees per hour, not per month or per year, so we are already the cheapest in the world. The government has a whole slew of incentives and subsidies that it has announced, and it keeps adding more. I don’t know if anybody else wants to add.

Narendra Singh

So there are quite a few. No, no, it’s public, it’s all public. The event is all about India; leaving aside our global missions, the IndiaAI Mission is bringing us together to build this entire ecosystem. Two years back, Sam Altman was in India saying you can’t do this, it’s not possible. Now look: the country has launched 12 foundation models. This is only possible when you democratize AI access, or GPU access, for innovators like you. That’s what the government has already done. You just make a request and you get an allocation of GPUs with one of the providers like us, and half the price is paid by the government. By the way, it’s not a subsidy to us; they are paying the full price and subsidizing the end user, the innovators like you. Thanks.

Satvinder Singh

So, I think when it comes to higher education, the challenge is worldwide. I can tell you, from dinner conversations with some of the most established people I’ve sat with, along with their children who are all in higher education, you can see that the value system is changing. The focus today of some of the most influential people who can make a difference is actually to encourage their kids to become more enterprising. So I think the culture of being enterprising has to be given priority if we are to adapt to what’s coming. And if we change that, if we create and inculcate that in the universities even more and bring it up front, I think that will be the way we can overcome some of the challenges you face.

At least, I’m trying to address this first part.

Narendra Singh

Yeah, I think I can add on top of that. As I said, you can get the GPUs from there. And now, you don’t need to learn to code today; you can code through AI, right? So you can build this. We have a billion users and a billion problems, so you can solve those problems. You can be entrepreneurial: solve one problem at home, or something college or school related. And that’s what we also encourage: students should come and go on industry visits, because if they see it, they will do it. The physical world has a lot more opportunity than the digital world, because the digital world is now getting concentrated. Imagine how many apps you are using on your phone.

Now this will all go to one app, which is OpenAI or Claude. So the money is going to one company, and that’s more dangerous than anything else.

Nihar Shah

I’ll just add on the energy part, right? You heard Narendra talk about putting data centers in space: free cooling, free energy. So that’s one part of it. But the other part that’s interesting about this whole energy question is that with AI, I think, you are really able to imagine the potential. We don’t know what we don’t know, and we don’t even know what we know, right? One of the examples I’ll give you is designing better chips. When Google DeepMind gave the problem of designing better chips to AI, they found a 30% improvement in chip performance, because AI was able to design better chips.

And so you can think about AI designing better data centers, AI designing many different parts of the whole chain. We don’t know all of the different things AI can do, even in mathematical and computational efficiency, these kinds of things. So there are many domains that we haven’t even touched that can potentially also have a transformative impact. Like I said, this is a very exciting time in our lives, where we get to really see what the impact is going to be. Thanks.

Tejpreet S Chopra

Vin, do you want to add? Do you want to add something? No, okay.

Vinod Jhawar

Just to add to that: what we expect AI tools to do is give an opportunity to the grassroots. When these tools are employed, people learn through them, and a lot of language barriers are also being broken, so English is no longer a barrier with all the AI tools. So you will have a set of people using them, and qualification is not a driver for that. This is what AI will do, and we will see a trend where a lot of blue-collar workers upskill by themselves, with no need to link it to degrees. It is the self-learn model, assisted by AI tools, that will make them competent for the market. They could be a specialist advising, or an entrepreneur, or a coach; they need not be sitting at a desk doing something that is written down for them. I think this is how we feel the education system will also change with these tools being available.

Dr. Mahendra Karpan

Thank you. So obviously we are the newest, particularly in consideration of this room; we’re new to the AI game. But one of the things that we’ve been able to do in Guyana is to create a digital school at the primary level, and it has started working. In fact, it’s now being requested by other countries in the region, the Caribbean region. Part of our objective is to use this to get kids hooked on technology and AI-type education. Hopefully it can be tailored to each individual child, to identify strengths and weaknesses, to strengthen the areas that are weak, whether it’s literacy, numeracy, anything, tailored to that particular child, so that their interests can be piqued, exploited, and expanded,

and ultimately they may be able to condense eight hours of school time into maybe three hours, and then they can go outside and play like normal kids, the way we used to play as kids. So digital schooling, the digital era, is not necessarily meant to take all their time behind a computer and a desk, but to give them more time and more freedom, and to create a habit that can follow them, not just at the primary level, but when they get to university and all the way up to adulthood.

Tejpreet S Chopra

Thanks very much. I know we’re out of time right now, so I’ll quickly wrap this up in about 45 seconds. I think it’s been an incredible discussion. Boy, it’s gone two times now, so I really have to wrap it up. Six or seven key takeaways from today’s discussion. The first is that jobs are going to be key: it’s going to be collaboration, not replacement; the way we do our jobs is what’s critical. Dr. Karpan talked about agriculture, health, and medicine; I think there’s going to be a huge transformation.

But the good thing is humans want touch, so that’s good. Still, there will be a lot of revolution in terms of telemedicine, et cetera. Nihar, you talked about cooling and bottlenecks; those are things we all have to think about in our countries: how do we provide the infrastructure for cooling, et cetera, and that’s something you all can work on and let us know how to make more efficient. You talked about, in terms of Nxtra, the talent challenges we’re going to have and how we’re going to have to manage all these data centers. Somebody mentioned recently that some of the big data centers in India are down 30 percent of the time because of the lack of talent to maintain them.

You talked about the supply chain. It’s fantastic that India is also thinking about putting data centers in space, which is fascinating. There’s going to be a big debate about more data centers versus the edge. As I mentioned, there’s one school of thought that we can actually bring big AI to every factory in India by bringing it to the edge, and the computing power that’s developing is going to make that happen. And the last point, which you mentioned, and I think this is going to be the key takeaway for all of us, is that the speed at which technology is changing is so rapid that we’re all going to require continuous learning going forward.

And it doesn’t matter how old you are; that’s going to be the biggest takeaway for me: continuous learning and upskilling is going to be key for all of us. So with that, really, thank you very much to all my panelists, and to everybody. Hopefully we can all make an impact around the world. Thank you very much.

Moderator

Thank you. Mr. Cana, Dipali, will you sit at the front? Please take a seat at the front. Ambassador Garcia? Ambassador Garcia. And if anybody wants to... Ibu Ayu. Okay, thank you very much, everyone. Oh, do you want... I’m getting my cues from the photographer; it’s not my show yet until I start. Okay, it’s the photographer. Can we please... you’d like us to stand up for a group photo? Okay. Thank you very much. It’s very Asia: it’s not an event unless there’s a photo, so thank you very much. All right. Thank you very much. Good afternoon, everyone, and welcome. It’s a real privilege for me to host and moderate this session, Trusted AI at Scale: A Global South Leadership Dialogue, here at the India AI Impact Summit.

Now, this session hits squarely within the summit’s trusted AI pillar, and deliberately so. Because trust is no longer a downstream concern, it is now the condition for scale. Across governments, enterprises, and societies, we are moving past the question of whether AI will be adopted. The real question is whether it will be trusted by citizens, by institutions, and across borders. So why this session, and why did ISA host this session? The framing for today’s conversation is very intentional. Much of the global AI governance debate is still shaped by frameworks emerging from the Global North, the US, Europe, and China. Those frameworks are important, but they are not sufficient for the lived realities of the Global South, where AI is often deployed at population scale, under real resource constraints.

and in contexts where the cost of failure is not abstract; it is social, economic, and political. This is precisely the gap that AI Safety Asia was created to address. I am one of the advisors of ISA, and our mandate is straightforward but ambitious: to bridge the global north and the global south on AI governance, not by importing templates wholesale, but by co-designing governance approaches that are interoperable, pragmatic, and grounded in local institutional strengths. And we do this through three pillars: collaboration, capacity building, and policy-relevant research. And what makes this session different? That brings me to expectations. This session is not about abstract principles or ideal end states. We are here to surface operational blueprints, how trust is built in practice, and we have an amazing panel that will hopefully be able to really bring that to the table,

and how safety is governed under real constraints, and how AI systems actually reach the people that states often struggle to serve. The speakers you will hear from today are not theorizing from a distance. They are governing, financing, regulating, and deploying AI in the real world: from small island states to large democracies, from welfare delivery to financial systems, from regional cooperation to enterprise risk management. So one final framing point before we begin. The goal of today’s dialogue is not to position the Global South as a passive recipient of AI governance norms, and we’ll hear from Cambodia, the Maldives, Indonesia, and Brazil. It is to position the Global South as a co-author of those norms, contributing models of governance that are population-scale, institution-aware, and grounded in lived social reality.

That is the through line of this session: from why trusted AI matters, to who it must reach, to how it is enabled, governed, and ultimately operationalized. With that, I’m delighted to open this dialogue. We’ll begin with opening remarks that set the stakes, why trusted AI is existential and not abstract, and then we’ll move through the discussion. I realize that time is very short, and I think one of the reasons Ed put me here is because I’m known to crack the whip a bit. So with all due respect, I know you’re all very important people, but I will let you know when the time is up. With that, I would like to invite His Excellency Professor Mohamed Kinaanath, Minister of State for Homeland Security and Technology of the Maldives.

Your Excellency.

Mohamed Kinaanath

Your Excellencies, distinguished heads of delegations, honorable ministers, esteemed leaders. It is both a privilege and a profound responsibility to stand here, not merely as a representative of the Republic of Maldives, but as a voice for many SIDS, small island developing states. I extend my warmest gratitude to the organizers of this forum for creating a platform where the aspirations of nations, regardless of their geographical size, can be heard alongside the strategies of those leading the frontier of innovation. Ladies and gentlemen, when the global discourse turns to AI, it is often centered on the ambitions of large economies, on computing scale, or on a trillion-dollar geopolitical competition. And while these dimensions are significant, they represent only part of the narrative for SIDS like the Maldives.

For nations defined by the geographical dispersal of small islands, 1,200 islands, a narrow economic base, and acute exposure to climate change, AI is not a matter of competitive advantage alone. It is a matter of institutional resilience, a matter of sovereign capacity, and, increasingly, a matter of survival. The Maldives comprises nearly 1,200 remote islands spread across 850 square kilometers. Our economy has been based mainly on tourism. Our exposure to sea-level rise remains among the highest of any nation on Earth. These realities do not diminish our ambitions, and they demand that we adopt technologies that can deliver public services efficiently across vast distances, strengthen governance, and diversify the economic foundation. The government of the Maldives, under the leadership of our current president, launched a Digital Transformation Agenda, a comprehensive national vision to transform the Maldives into a digital-first nation within the coming three years.

The technology vision is called Maldives 2.0. It is not a technology initiative in isolation; it is a fundamental reimagination of how the state serves its people, how the economy grows, and how opportunity reaches every citizen of the Maldives. We have already begun the implementation. The Maldives has good technology infrastructure if you look at the region: we have one of the highest internet penetration rates in the region and the highest number of mobile subscribers in the region. Our population is half a million, and we have 1 million mobile subscribers. 4G coverage is 100%; 5G is at 80%, one of the highest in the region. We have six subsea cables, and fiber to each household is at 100%. Maybe some European countries have not even achieved these statistics.

So considering the delivery of AI, and considering the geography of the Maldives, AI is very important for us in the health and education sectors, since our islands are very remote. AI also offers the Maldives a pathway to economic diversification, enabling us to develop a knowledge economy, to cultivate local technology enterprises, and to position our youth digitally. The Maldives is not approaching AI without preparation. We are building governance structures to ensure that this technology serves our people ethically. In July 2025, the Maldives launched the AI Readiness Assessment Methodology Report, which was developed with assistance from UNESCO; this landmark assessment is the first of its kind in South Asia. Building on this assessment, the government is now advancing a national AI master plan, and an AI Act is also underway.

The UNESCO Readiness Assessment has further recommended the establishment of an independent AI governance body and a multi-stakeholder advisory council. These are some of the recommendations of the report. So, as I told you, the Maldives is getting ready for AI, and since we have this Maldives 2.0 transformation mission, we are working very hard over the next three years to complete digitalization in the Maldives. Excellencies, the Maldives may be small in land, but we are vast in determination. We are a nation that has built its identity upon resilience, resilience against the tides that shape our shores.

So Maldives 2.0, as I have said, is our commitment to the future. AI deployed responsibly and governed ethically is central to this vision. We do not seek to replicate the digital trajectories of large nations; we seek to chart a course that is authentically ours, one that reflects our values and addresses our vulnerabilities. As the world convenes to deliberate on the governance of AI, let us build an AI future which is inclusive, intelligent, equitable, and as human as the technology is powerful. Thank you so much. Thank you.

Moderator

so much, Your Excellency, Minister Kinaanath. I’d now like to invite Dipali Khanna, Senior VP and Head of Asia for the Rockefeller Foundation, for her remarks.

Dipali Khanna

Just before I start: we were talking about global north and global south. What struck me on this panel is that the women are at the periphery, right? So in anything and everything we’re going to do in this space, we’ll have to get women back in the center. But I was also excited to see that we have strong women who can manage these men. So anyway. Good afternoon, Ministers, Excellencies, colleagues, and partners. Let me begin by thanking AI Safety Asia for convening this dialogue and JPMorgan Chase for co-hosting. The fact that this conversation is happening here, in this region, with this leadership, really matters. PM Modi in his keynote yesterday laid out the vision for Manav: building AI that is safe, ethical, and centered on people, ensuring technology serves humanity responsibly and benefits everyone, including women, right?

We’ve just heard powerful perspectives that bring the point to life: from the Maldives, that AI is not abstract policy, it is a survival tool. I know a colleague from Togo couldn’t join, but I’m sure she would have mentioned that trusted AI can make the invisible visible. So the question before us is not why AI matters or who should benefit; it is how we build it responsibly, at scale, and with legitimacy. For over 100 years, the Rockefeller Foundation has leveraged advanced technologies for the betterment of society, and we believe there are learnings from that work that also apply to trusted AI: partnership, patient capital, and institutional strength. What distinguishes success stories like Togo’s Novissi and India’s CoWIN is not just technological sophistication; it is alignment.

Governments willing to move decisively, private sector actors willing to collaborate, technologists willing to design for public systems, and catalytic capital willing to absorb early risk. Novissi reached nearly a million informal workers, not in months but in days. CoWIN delivered at population scale with transparency and interoperability built in. This was not mere luck; these were examples of ecosystems working together. That’s partnership. For adoption, users must trust both that AI will deliver the benefits and that it will do so without harm. Much like early vaccine development, we need to invest both in supporting users to adopt the technology and in building robust evidence and systems that ensure safety. And scaling this trusted AI in the global south requires more than venture timelines.

It requires risk tolerance. It requires capital that understands that building sovereign AI capacity involves experimentation, regulatory iteration, and institutional learning. Philanthropy can truly play a catalytic role here, not by replacing markets, not by dictating governance, but by de-risking what some leaders have described as the smart adopter model. The smart adopter does not wait for perfect consensus. It adapts responsibly. It pilots with guardrails. It builds local institutional muscle alongside technical capability. Catalytic capital can support regulatory sandboxes, independent safety assessments, talent pipelines, and interoperable standards, so that adoption is fast and nimble rather than short-lived. That’s the power of patient capital. And finally, institutional strength. Digital public infrastructure has shown us something profound: trust must be designed from day one, not retrofitted after deployment.

Transparency, auditability, grievance redress, open architecture are not compliance burdens. They’re adoption accelerators. If our AI systems are to scale in health, climate resilience, food systems and financial inclusion, they must be built on institutional foundations that citizens recognize and most importantly trust. Businesses have a critical role here. Responsible innovation is not simply about internal governance frameworks. It is about long -term partnership with governments and societies. It is about seeing trust as a strategic infrastructure, not friction, because trusted systems scale, untrusted systems stall. The Global South is demonstrating that it does not need to choose between speed and safety. It can design both. The opportunity now is to align partnerships and patient capital behind that leadership. So that trusted AI at scale is not a slogan.

It is operational. The Rockefeller Foundation stands ready to continue playing a catalytic role in that journey because trusted AI is not simply a governance aspiration. It is a development imperative. Thank you

Moderator

Thank you so much to both Your Excellency Minister Kinaanath and Dipali. I thought, again, it was a great way to start us off for the discussion today. You’re welcome to stay sitting in front, but we’ll start the discussion. Actually, I think it’s set the tone for what we really wanted.

Everyone that you see in front brings a very specific experience and set of skills, coming either from the private sector or from government. As Dipali mentioned, building trust starts from the beginning; it’s not an afterthought. So I’d like to start with the Under Secretary of State, Your Excellency Sokeng. The question I’m going to have for everyone is: what is the single biggest obstacle to operationalizing trust in your context, based on your experience? What can this room, filled with quite a lot of people from different sectors, do about it? And what have you heard in these past couple of days here at the summit?

Son Sokeng

First, thank you, Imah, for setting the tone so well for these discussions, and I’d like to thank AI Safety Asia for having me on this panel. From Cambodia’s perspective, I would say the short answer to that is how to get people familiar with AI, and that starts with the people: the user, the leader, and the regulator. Aside from that, I can talk a little bit about the Cambodian experience. Similar to what the Excellency Minister from the Maldives mentioned, Cambodia began its journey by conducting the AI readiness assessment supported by UNESCO, which we completed in July 2025. From that perspective, relying on the recommendations, we started to think about what the strategy is for Cambodia to move forward in terms of AI adoption in Cambodia.

Based on the recommendations, the national AI strategy has been drafted, and we are currently in the process of finalizing it. At the same time, we are also drafting the national AI governance framework, keeping the national strategy in mind. One of the key strategic priorities in the national strategy is people, which is the first priority, ranging from the user, as I mentioned earlier, to the leader, the regulator, and government officials. The second priority is infrastructure and data. The third and fourth are AI adoption in government and in the private sector. The fifth strategic priority is governance, and the last is cooperation and research.

So based on these priorities, you can see that the human is still the first and key priority for Cambodia. Building on that, our draft AI governance framework is also very much human-centric: we believe that governance should be aligned with the risk of AI. So the content of our governance framework is based on risk assessment, and to understand the risk, people have to know the impact of AI. The government’s very clear intention is that we need to educate people, to let people understand what AI tools are and what their implications are. So in 2024 the government introduced the Cambodia Digital Skills Roadmap, which outlines the plan for the next 10 years for Cambodia in terms of human development.

And our goal is that in the next 10 years we will have 100,000 talents who are AI-ready. In addition to that, we have also introduced various programs to educate government officials; as of now, we have trained more than 10,000 government officials in basic digital skills, part of which is AI skills as well. So based on what we have right now, if there is one thing you ask me that we can do in this room, I would say it is to increase the capacity of humans to understand the risk of AI.

Moderator

Thank you so much. I have so many questions, but I’m going to hold them for now. I’m going to move to Ambassador Garcia, the tech ambassador for Brazil, a G20 country leading BRICS, along with Indonesia in the G20. As I mentioned, the global south must be architects and not observers, and I believe Brazil is at the forefront of this. Can you say a little bit more about that, and what obstacles do you find?

Eugenio Vargas Garcia

capabilities to harness the power of the technology. So somehow we need to enhance our own national capabilities, but in cooperation with other partners overseas. And finally, we just had COP30, as you remember, last November, and we included digital technologies and climate change as a sustainability problem, because now we have been discussing data centers and energy efficiency. So sustainability is key, in the sense that we are always trying to send this message in terms of an AI development-oriented strategy. And I think for the global south it’s important that we engage in tech diplomacy, because otherwise we will not get heard; we need to speak up and have our voice heard where it matters. Thank you so much.

Moderator

So moving from one great nation in the south to another: Ibu Ayu. Ibu Ayu is the Director of AI and Emerging Technology Ecosystems at the Ministry of Communications in Indonesia. Ibu Ayu, can you tell us a little bit more about Indonesia’s national strategies, and where you find the obstacles? And, as Ambassador Garcia mentioned, in the ecosystem of BRICS, where does Indonesia, or where does ASEAN, sit on this as well?

Aju Widya Sari

Thank you, Ima. It’s an honor for me to sit here from the Ministry of Communications and Digital Affairs. I cannot say it is an obstacle; I would say it’s challenging. Mentioning the challenges: Indonesia has many things to be resolved. One is the infrastructure, because our penetration of broadband, especially mobile broadband, even though it is above 95%, is still based on 4G coverage, and for AI we need more coverage for 5G. The penetration of fixed broadband and backbone is also quite low, because Indonesia has hundreds of districts and around 10,000 sub-districts; today, penetration is still 70% by sub-district. That’s why we need to push the penetration of the backbone.

Regarding data centers: we have many data centers today, but the GPU base is still limited, so I think we need to invest more in processing for AI. That relates to the infrastructure. As for the regulatory framework, right now in Indonesia we have set up the national AI roadmap, and we are also preparing the ethical AI guideline. Talking about the national AI roadmap, we are sure that we need strategies that are real, not just theoretical, because when we execute our vision in the national roadmap, we have four strategic directions. One is collaborative governance, and the second is encouraging the innovation ecosystem.

The third is to strengthen capabilities and capacities, including infrastructure, and the last is mitigating risk. This national roadmap is important for us because we need clarity for the five years ahead regarding the issues that come from AI. Regarding the ethical AI guideline, we set up the rules and clarity of responsibility for AI actors, and we are also preparing instruments for monitoring and evaluation, because ethics are not just ethics; we have to monitor them. And the last thing is that we have to put safeguards in place for the people, for how they use and develop AI. That’s the main thing we are preparing.

Moderator

Thank you, Ibu Ayu. I think it’s been mentioned already, and I’m glad Ambassador Garcia mentioned it, and of course Minister Kinaanath mentioned it in his address. I’m going to turn to Dr. Parag Khanna. Sorry, I’m going to bring in the discussion that we had in the green room (I’m calling it the green room; it’s not actually green) about how you, as a private company, use AI and how that could be beneficial for climate resilience and development, especially in vulnerable countries. Can you tell us a little bit about that? Dr. Parag Khanna, of course, is the founder and CEO of AlphaGeo.

Parag Khanna

Thank you. Thank you so much, Ina. Thank you all for being here. Well, AI as a concept evokes this notion of leapfrogging. Do you remember when we used to use that? We used to use that term all the time, leapfrogging. And, of course, it applied very appropriately to mobile telephony, to fintech, to renewable energy, solar panels. So, you know, faster, better, cheaper. and inherent in that concept, which is very important now when we talk about AI, is the notion of second mover advantage. That’s what leapfrogging was fundamentally about, having second mover advantage. Now, why that’s relevant right now is because, of course, developed countries in particular, particularly the United States and others, have invested enormous amounts of capital in the capex requirements of AI.

You know, some significant percentage of U.S. GDP growth, for example, is attributable right now to that AI capex-related infrastructure investment. But that is not something, of course, that nations of the global south, so to speak, developing countries, can afford. And so the question becomes: is there an advantage to late development when it comes to AI that can save developing countries of the south a lot of money while still enjoying the fruits of that innovation? So especially in a context where you’re making trade-offs between electricity, food, and water, the basics for your population, while, of course, there’s now this almost emotional, hype-driven pressure to invest, to clear land, to build data centers, to divert energy, we have to ask ourselves: is that the right way to allocate capital?

Or should one be taking advantage of cloud computing, edge computing, and sovereign cloud solutions that can generate the same or better output, more bang for your buck, with less capex? That’s the moment we’re at. And it’s important to remember that we’re having this conversation in India. One of the virtues of India hosting this event is precisely that India’s rise as an AI superpower breaks the narrative out of the conventional wisdom that it’s a two-horse race between the U.S. and China, and that you’re doomed in some way to choose between your data being hoovered up by one or the other player. What India is offering, more than just in theory, is rapid diffusion of the latest technologies, cloud-based models and solutions, through the tools of digital public infrastructure, DPI, which has been one of the benefits of this event over the course of the years.

People have learned this all-important acronym, DPI, which I have believed in and supported for quite some time, because it does hold the promise of neutrality, of a menu of options, of being delivered in a way that protects the sovereignty of data, and in a very affordable way. And India has been a pioneer of it, obviously domestically, with Aadhaar and so forth. And, if it hasn’t been said enough this week, more than 50 countries are building payment systems and identity systems on that stack. So that’s a great example of DPI. So think of AI in that mold of second-mover advantage, of leapfrogging, following the mold of cloud-based solutions that can be low-cost.

Now let me just quickly talk about two areas where there are huge gaps in public sector access to data, ownership, or simply knowledge of solutions, and where AI, and particularly the way in which we apply data science to geospatial data, can help. These are also two areas of critical, fundamental, if not existential importance to developing countries. One is sustainable urbanization, and the second is climate adaptation. Anywhere you go in the world, in rich or poor countries, if you survey the average person on the street and ask them what is the biggest problem plaguing your society, nine out of ten people that I speak to in dozens of countries around the world say affordable housing.

And I think that’s a really important part of the problem: sustained urbanization that is so organic, so rapid, so unplanned, accelerating around the world. But now, finally, governments have the tools, again, geospatial tools, mapping tools, to understand which districts and which settlements are expanding and why, where people are coming from, and what kinds of housing need to be built where. Governments have always been fighting backwards, or have, quite frankly, given up on grappling with these issues. But now we have foresight: AI-powered geospatial tools that can look decades ahead and say, this has been your time-series urban expansion, this is how you map it out, this is where you should be building what, and when. Bringing together demographics, infrastructure, migration, fiscal spending, and directed targeting in this way is a great use case for AI that almost the entire developing world could use, has barely begun to use, and that is very cost-effective, right?

So that’s number one. The second, equally if not more important, is climate adaptation. Climate risk is both acute and chronic. We’re talking about monsoons, floods, fires that are becoming more frequent and devastating. And, of course, complex climate modeling is not the kind of thing that any individual country can or should finance. We have global climate models that are AI-powered, that are developed with the world’s best institutions, that are publicly financed, and that are now available to be downscaled for your country. And this is something that, again, especially developing countries, especially countries of the South, which are most affected by climate dislocation and climate risk, can and should take advantage of. But, again, to be clear, they have barely begun to do so yet.

So targeting infrastructure investments, targeting your infrastructure to adapt to climate risk: where do you need to build seawalls, flood barriers, flood-control measures, irrigation systems for drought? On all of that we are, again, just like urbanization, years and years and years behind, and there’s almost no country on this panel, almost no country in the world, even wealthy countries, that is ahead of the curve on this. The entire planet is behind, as we know from COP summits, but it is the countries of the South that are going to be the worst affected on the fastest timeline. If you’re not using the tools that are available right now, AI-powered climate modeling, downscaling, and adaptation scoring, to plan your national infrastructure, and then putting together the public-private partnerships to get it done, you’re behind.

So this is about global public goods, right? Affordable housing, manageable urbanization at a global scale, climate adaptation for people everywhere, but it has to be delivered locally. And that’s why it’s incumbent on each nation represented here, each nation of the Global South, to really harness and take advantage of these tools.

Moderator

Thank you so much, Dr. Parag. Again, lots of questions in my head. Kip Wainscott, Executive Director of Global AI Policy at JPMorgan Chase, one of our biggest supporters as well. Thank you so much for supporting this event. You are not here just because of that, I can assure you. But I’d love to hear what you have to say, particularly on the question I brought forward earlier about the obstacles, and in particular where you see them. And again, financial services, model risk management, and all of that in the safety architecture of AI. Yeah. Go ahead.

Kip Wainscott

Thank you. A lot to unpack there. This is a great panel, by the way. So many esteemed panelists. I really feel privileged to share the microphone with all of you. You know, it’s interesting thinking about these things from the vantage of JPMorgan Chase, because we’re really interrogating these questions from three different perspectives. One is as one of the world’s largest financial houses that’s deeply invested in artificial intelligence. We have an acute interest in unlocking the value of this technology and seeing its growth potential. But there’s a simple truth that we recognize, and that is that AI is only valuable if it is deployed, and deployment depends on trust.

And so we have an interest in this multi-stakeholder dialogue about the trust model that is going to unlock diffusion, not just across enterprises but in the public sphere and across the Global South, really putting this technology into organizations that are impacting people’s real lives. The second perspective from which we’re looking at all of this is as a deployer of the technology ourselves. We are one of the world’s largest deployers of AI. And what’s interesting is that we’re also in one of the most regulated industries, and yet financial services have been among the earliest adopters of AI. We’ve been using artificial intelligence through an evolutionary ramp of really more than a decade.

To combat fraud, to protect consumers, to create more efficient personalization of financial services. And I think one reason why you see financial services companies so ready and eager to adopt is because we have that existing trust architecture. Trust isn’t just a feature of financial services; it’s the core business model. And so we have, as you mentioned, model risk management. We have these rigorous practices of evaluating models, of documenting governance and oversight, of really ensuring that there’s ongoing monitoring across all of our technology deployments, in a way that lends itself to what I would call a comfort in building the trust ecosystem for responsible deployment. And then the third prism that we look at this issue through is as a purchaser of these technologies, almost a procurement lens.

We spend $20 billion annually on our technology budget. That puts us in a really sizable position in the innovation ecosystem of startups and scale-ups that want to sell their products. They’re building innovative new artificial intelligence applications with the hope that they’re going to be able to sell them to the world, including to JPMorgan Chase. And we see a real inefficiency in innovation right now in the fact that there isn’t a shared set of expectations for trust, for what these products should be benchmarked against in order for us to ingest them, in a way that gives us the confidence that they’re going to serve our customers well and reflect our responsibilities, our duty of care as a regulated industry.

And so it speaks to the need to bring these diverse perspectives to this conversation around governance, so that we can get past the compartmentalization of AI safety as a siloed conversation and accelerated AI adoption as a different conversation. This is the same conversation. And the purpose of both of those conversations is really to align on a trust model that is going to ensure that we can deploy these technologies in a very broad and impactful way across the economy.

Moderator

I’m going to put you on the spot a little bit while you have your microphone. You’ve been here throughout the week, and you’ve heard the panelists just now speak. From what you’ve been hearing throughout the week, how optimistic are you? I think His Excellency Sokeng mentioned collaboration. How confident are you that we can build these collaborations to build trust in AI, just from the conversations that you’ve had this week?

Kip Wainscott

Yeah, no, I’m optimistic. I think just the fact that we’re here in New Delhi, having this summit in this environment, this is a much bigger summit. I think it’s a more inclusive cross-section of voices, and so I think that reflects that this conversation is getting bigger, that we’ve moved past this focus on technical capabilities, which is kind of where we had been. Now, I think, capability has really been almost commoditized, and legitimacy has not. And we’re in this phase now where we really need to establish the legitimacy of these technologies: that they are fit for purpose, that they can be trusted and deployed across these different societal sectors. And so I am optimistic.

It requires intention. And I’m seeing the intentionality, I think, around the curation of these conversations. I think there’s a lot to carry forward here. Also, as some of you may have seen, we’re very near the end of this summit, and when we were barely more than halfway through the week (excuse me, I’m running on fumes at this point) people were already writing up their assessments of what the themes were, what the takeaways were. And people were saying, oh, this summit is no longer about responsibility or safety, and that just isn’t my perception of these conversations. It really is that we’re just talking about them in a different way.

We’re talking about them in terms of how they are going to impact real lives, how we can take this technology into the economy in really valuable ways. And in order to do that, we have to include that trust dimension.

Moderator

Okay. Thank you so much. We have eight minutes left, and I’ve been told that we have to finish on time, but I really want to get this question in and hopefully be able to hear from everyone. So Ambassador Garcia and Your Excellency Sokeng and Ibu Ayu, what are you taking away? What are you taking home from this panel, first of all? Or from the week that you’ve been here, reflecting on what Kip just mentioned?

Eugenio Vargas Garcia

Yes, thank you. First, I think India was very successful in bringing this summit to the Global South for the very first time. But this is the Bletchley process that began in the UK in 2023; then we had Seoul and Paris. So what we have been discussing here is something that is more inclusive. And some new concerns were added to the agenda, such as sustainability, not that it was not discussed before, but now with this perspective coming from the Global South, which is important. So I would conclude with three recommendations, because we need to be practical. Mostly, we agree on high-level principles in terms of AI governance. But when we think of countries lacking resources, or having other competing priorities, they need to decide what to do and, in many cases, prioritize.

So first, I think they should start small, with a few small-scale, quick-impact projects, so that they can build on proven success. Let’s say focus on education, healthcare, agriculture; then focus on some specific projects and build from there to reach the next level. Second, they need to seek international partners. It’s sometimes useful and needed to enhance national capabilities; it’s difficult for a single country alone to invest in infrastructure and do something that’s expensive, so seek international cooperation. And third, as I said before, engage in these discussions at an international level, engage in tech diplomacy, and send more people to discuss where it’s important, including at the United Nations. Thank you so much.

H.E. Sokeng

Thank you. Seeing the time, I’ll just go very quickly with a last sentence. Coming to this summit, I agree with our panelists that it’s very inclusive, and we can see perspectives from all the stakeholders: from government, industry, academia, and even startups. So, learning from this, I have just one wish, which is that we have to be honest with each other, the industry and the government, and bear in mind that we are here to protect people, for the people. So whatever we do, we need to think about people first. With that, please consider that when we think of governance frameworks, the regulation or the law that the government might put in place should be a mechanism to promote innovation.

It’s not an obstacle, not an obstacle to innovation. So in order to do that, we need to build trust as well, and we need to be honest with each other. Thank you.

Moderator

Ibu Ayu, quickly.

Aju Widya Sari

Thank you. Actually, I’m very impressed with the spirit of Prime Minister Modi yesterday. I think every country has the same spirit regarding AI. So there are three points that I’m taking from this summit. One is collaboration, indeed. Then inclusiveness, because if we care about inclusiveness, we need intention from government, from industry, from the people. And the last is investment, because, as you know, AI needs more and more investment; this is where collaboration comes in. But the issue that will come is how we define the sovereign, because sovereignty is based on the needs of the country. How do we define it? Is it equal or not? And that is still an open question for me as well.

Moderator

Thank you so much, Ibu Ayu. Dr. Parag, I’m going to have you bring it home in one minute. What are you taking home, in particular from your conversations with some of the different governments here?

Parag Khanna

Well, the first thing is that I actually want to echo Kip’s point that we’re at an inflection point. In phase one, let’s say, there was a lot of harping about trust. Can we trust? Can we not trust? And I think it’s a good thing that that pressure was there, but now that pressure to have transparency in models has delivered to some degree. And it’s been done in a way where public and private have not been on opposite sides of the discussion, but have really partnered. So I think we’re really beyond that. And now we can move from models and theory into action and application. And that’s the part of the stack that we want to be on.

The infrastructure build-out is there. It’s being provided. The apps are being developed; they’re being deployed. I have seen a little bit of this, but would want to see a lot more in subsequent editions of this summit, especially as it migrates around the world now and remains, perhaps, in the hands of developing countries on the application side as much as possible. And I’d want us to think not just about very specific verticals, as we have here and elsewhere, your healthcare, education, and, as I’ve emphasized, climate and others, but probably about something more societal, around resilience. You know, resilience is a term that comes up a lot but doesn’t really get quantified enough. And if we can push for that, it’s going to help us establish performance benchmarks, not just of models but of applications.

And that’s really what I think everyone wants to see, to make sure that AI doesn’t become not just a financial bubble but almost a policy bubble as well.

Related Resources: Knowledge base sources related to the discussion topics (36)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (medium confidence)

“Tejpreet S Chopra opened the session by emphasizing that AI‑driven workforce and economic strategies are the summit’s top priority for governments worldwide, which are asking how AI will reshape society, industry and employment.”

The opening remarks of the summit explicitly highlighted AI-driven workforce and economic strategies as a critical topic, matching the report’s description [S9] and the agenda listing Tejpreet S Chopra as a speaker on AI strategies for jobs and economic development [S1].

Correction (high confidence)

“Satvinder Singh described DEFA as the largest legally‑binding regional digital agreement under negotiation by the 11 ASEAN nations and India, intended to create a digitally‑interconnected market of 700 million people.”

The knowledge base indicates that DEFA is being negotiated by the ten ASEAN member states (not eleven) and does not yet include India as a signatory; it is described as the first regional digital economy agreement covering digital ID, payments, data flows and trade [S51] and [S102].

Additional Context (low confidence)

“Satvinder Singh described DEFA as the largest legally‑binding regional digital agreement under negotiation by the 11 ASEAN nations and India, intended to create a digitally‑interconnected market of 700 million people.”

DEFA aims to establish a region-wide digital public infrastructure and is positioned as a pioneering regional digital-economy framework, which provides context for its significance even though the membership details differ from the claim [S51].

External Sources (112)
S1
Shaping the Future AI Strategies for Jobs and Economic Development — – Nihar Shah- Vinod Jhawar- Narendra Singh- Aju Widya Sari
S2
Digital democracy and future realities | IGF 2023 WS #476 — Nima Iyer:Yes, yes, yes, definitely. Thank you so much for that. Thank you. But what you said also got me thinking about…
S3
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — -Arwin Datumaya Wahyudi Sumari: Indonesian Air Force officer and professor at the State Polytechnic of Malang, co-invent…
S4
Shaping the Future AI Strategies for Jobs and Economic Development — -Dr. Mahendra Karpan- Interventional cardiologist and presidential advisor to Guyana, expert in healthcare transformatio…
S5
Building the Workforce_ AI for Viksit Bharat 2047 — -Dr. Jitendra Singh- Role/Title: Honorable Minister, Minister of State for Personnel, Minister of State for Personal Gri…
S6
ElevenLabs Voice AI Session & NCRB/NPM Fireside Chat — -Shailendra Pal Singh: Role/title not explicitly mentioned, but appears to be a co-presenter/expert on Bhashini translat…
S7
S8
Shaping the Future AI Strategies for Jobs and Economic Development — So I appreciate everybody who’s out here. My name is Tej Trikot Chopra, and I’m the founder and CEO of Industry .AI. So …
S9
https://dig.watch/event/india-ai-impact-summit-2026/shaping-the-future-ai-strategies-for-jobs-and-economic-development — So I appreciate everybody who’s out here. My name is Tej Trikot Chopra, and I’m the founder and CEO of Industry .AI. So …
S10
S11
[WebDebate] The UN beyond the West: How do countries from the Global South make their mark? — Dr Eugenio Vargas Garcia is senior adviser on Peace and Security at the Office of the President of the 74th Session of t…
S12
Eugenio Vargas Garcia — Eugenio Vargas Garcia has 30 years of professional experience in foreign policy and diplomacy. He holds a PhD in History…
S13
Shaping the Future AI Strategies for Jobs and Economic Development — – Dipali Khanna- Kip Wainscott – Parag Khanna- Narendra Singh
S14
S15
Shaping the Future AI Strategies for Jobs and Economic Development — -H.E. Sokeng- Same as Son Sokeng, referred to with diplomatic title
S17
Open Forum #47 Demystifying WSis+20 — – **UNKNOWN** – Role/title not specified in transcript Kurtis Lindqvist: I’m Kris Lindqvist. I’m the President and CEO …
S18
Shaping the Future AI Strategies for Jobs and Economic Development — – Dipali Khanna- Kip Wainscott – Parag Khanna- Narendra Singh
S19
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S20
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S21
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S22
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S23
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S24
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S25
Shaping the Future AI Strategies for Jobs and Economic Development — -H.E. Sokeng- Same as Son Sokeng, referred to with diplomatic title
S26
Kingdom of Cambodia — The technical team that drafted the sub-decree was led by H.E. Dr. HENG Sokkung , Secretary of State of MIST…
S27
ISBN: — – H.E. Dr. Amani Abou-Zeid, African Union Commission – H.E. Ms. Aurélie Adam Soulé Zoumarou, Benin – Dr. Ann Aerts, …
S28
Shaping the Future AI Strategies for Jobs and Economic Development — – Satvinder Singh- Dr. Mahendra Karpan- Vinod Jhawar- H.E. Sokeng – Tejpreet S Chopra- Satvinder Singh- Son Sokeng- Vin…
S29
Shaping the Future AI Strategies for Jobs and Economic Development — So I appreciate everybody who’s out here. My name is Tej Trikot Chopra, and I’m the founder and CEO of Industry .AI. So …
S30
Panel Discussion: 01 — When asked to rate global AI infrastructure progress on a scale of one to ten, Minister Patria gave it 6 out of 10, high…
S31
Leveraging the postal network for a sustainable and inclusive deployment of digital infrastructure and services (UPU) — However, challenges in achieving connectivity and ensuring secure cash distribution in remote areas highlight the unique…
S32
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S33
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — Indonesia:Thank you. Moderator, Mr. Robin, good afternoon to all delegations here, allow me this morning to convey three…
S34
Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135 — Dian:Okay, thank you, Mr. Hayasi for the question. So I would like to address it in the role that Indonesia expect from …
S35
Comprehensive Report: UN General Assembly High-Level Meeting on the 20-Year Review of the World Summit on the Information Society (WSIS) Outcomes — At the same time, Indonesia is also preparing for the next frontier by advancing national frameworks on artificial intel…
S36
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — Speaker 3: Hello, everyone. My topic is the future of learning digitalization in primary education for sustainable de…
S37
Responsible AI for Children Safe Playful and Empowering Learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S38
DIGITAL DIVIDENDS — In addition to foundational skills, workers are being required to use more critical thinking and problem solving, commu…
S39
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “as we go from one gig to nine to ten gig … we have to realize that india is challenged by three physical things that …
S40
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — Energy constraints are real – gas-fired power plants remain the primary scalable solution for data centers due to physic…
S41
Redrawing the Geography of Jobs / Davos 2025 — Audience: Hello, can you hear me? I’m Suin Lee, I’m one of the shop social entrepreneur working in education sector. …
S42
The Innovation Beneath AI: The US-India Partnership powering the AI Era — But. I think for entrepreneurs, it’s an extraordinary opportunity. And those that will win. in my mind over the next few…
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Bhattacharya identifies specific challenges faced by blue-collar workers including limited access to job opportunities, …
S44
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — – Mohamed Muizzu (President of Maldives) – Co-chair Mohamed Muizzu: I thank my esteemed co-chair for his statement. A…
S45
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at theInternet Governance Forum (IGF) 2025 in Oslocalled for urgent action to build sovereign and ethica…
S46
Climate diplomacy — Climate diplomacy also focuses on building alliances and partnerships beyond formal negotiation settings. This involves …
S47
Technology and Diplomacy: The Rise of Multilateralism in the Bay Area — Eugenio V. Garcia, the Brazilian Tech Diplomat in San Francisco, stressed the need for the participation of developing c…
S48
AI and the future of digital global supply chains (UNCTAD) — Governments in developing countries play a crucial role in fostering technological progress. They need to understand the…
S49
Can Digital Economy Agreements Limit Internet Fragmentation? | IGF 2023 Day 0 Event #76 — Digital Economy Agreements (DEAs) challenge the traditional boundaries of trade law in terms of scope and institutional …
S50
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — This comment introduces the concept of a regional digital economy agreement for ASEAN, positioning it as a potentially g…
S51
ASEAN set to introduce region-wide digital economy agreement — The Association of Southeast Asian Nations (ASEAN), supported by the World Economic Forum and the ASEAN-Korea Cooperatio…
S52
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S53
AI Infrastructure and Future Development: A Panel Discussion — Four years ago, a data center project had 100 electricians with 80 experts and 20 beginners. Now projects have 2,000 ele…
S54
Conversational AI in low income &amp; resource settings | IGF 2023 — Dino Cataldo Dell’Accio:Thank you very much, Dr. Gupta, for inviting me to participate in this very relevant, very impor…
S55
MedTech and AI Innovations in Public Health Systems — Shri Saurabh Jain from the Government of India outlined the SAHI (Strategy for Artificial Intelligence in Public Health)…
S56
WS #53 Leveraging the Internet in Environment and Health Resilience — – June Parris- Yao Amevi A. Sossou Artificial intelligence and other technologies should be designed to support rather …
S57
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S58
Developing capacities for bottom-up AI in the Global South: What role for the international community? — ### Infrastructure Prerequisites Versus Pragmatic Implementation Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. G…
S59
AI for agriculture Scaling Intelegence for food and climate resiliance — This comment is profoundly insightful because it cuts through the AI hype and addresses the fundamental challenge of res…
S60
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Rather than creating entirely new governance structures, Schneider advocated building upon existing international dialog…
S61
Shaping the Future AI Strategies for Jobs and Economic Development — -Workforce Transformation and Job Impact: A central theme throughout both panels was whether AI will replace or enhance …
S62
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Economic | Development | Sociocultural The argument emphasizes that the primary threat to employment is not AI replacin…
S63
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Continuous learning and adaptability are essential for future workforce
S64
Can Digital Economy Agreements Limit Internet Fragmentation? | IGF 2023 Day 0 Event #76 — Digital Economy Agreements (DEAs) challenge the traditional boundaries of trade law in terms of scope and institutional …
S65
ASEAN set to introduce region-wide digital economy agreement — The Association of Southeast Asian Nations (ASEAN), supported by the World Economic Forum and the ASEAN-Korea Cooperatio…
S66
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — This comment introduces the concept of a regional digital economy agreement for ASEAN, positioning it as a potentially g…
S67
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — This comment reframes the entire AI development narrative by identifying energy as the primary bottleneck rather than th…
S68
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — -Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption…
S69
Designing Indias Digital Future AI at the Core 6G at the Edge — Roy emphasizes that infrastructure challenges, particularly power consumption and site requirements, are the main factor…
S70
Conversational AI in low income &amp; resource settings | IGF 2023 — Dino Cataldo Dell’Accio:Thank you very much, Dr. Gupta, for inviting me to participate in this very relevant, very impor…
S71
MedTech and AI Innovations in Public Health Systems — Shri Saurabh Jain from the Government of India outlined the SAHI (Strategy for Artificial Intelligence in Public Health)…
S72
WS #53 Leveraging the Internet in Environment and Health Resilience — – Jorn Erbguth- June Parris- Yao Amevi A. Sossou – June Parris- Yao Amevi A. Sossou Artificial intelligence and other …
S73
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – Niraj Verma- Celestin Kadjidja- Ran Evan Xiao Liao Connectivity alone is not enough; what matters is meaningful use o…
S74
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S75
AI for agriculture Scaling Intelegence for food and climate resiliance — This comment is profoundly insightful because it cuts through the AI hype and addresses the fundamental challenge of res…
S76
WS #100 Integrating the Global South in Global AI Governance — AUDIENCE: Any insights or thoughts? Just one quick thought around good practices I’ve seen governments adopt in the r…
S77
Developing capacities for bottom-up AI in the Global South: What role for the international community? — ### Infrastructure Prerequisites Versus Pragmatic Implementation Jovan Kurbalija: Thank you. She’s quiet. Okay, okay. G…
S78
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S79
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S80
Panel Discussion Inclusion Innovation & the Future of AI — The discussion maintained a constructive and collaborative tone throughout, with panelists building on each other’s poin…
S81
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement abou…
S82
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The tone was consistently optimistic and forward-looking throughout, with panelists expressing excitement about opportun…
S83
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S84
National Disaster Management Authority — The discussion maintained a collaborative and solution-oriented tone throughout, with participants sharing both challeng…
S85
Safe Smart Cities and Climate Frustration — The discussion maintained a collaborative and solution-oriented tone throughout. Speakers were optimistic about the pote…
S86
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S87
Debating Technology / Davos 2025 — The tone of the discussion was largely thoughtful and measured, with the speakers acknowledging both the promise and ris…
S88
Wrap up — These key comments fundamentally reframed the discussion from typical technology policy debates to deeper philosophical …
S89
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — These key comments fundamentally shaped the symposium by establishing a framework for responsible, human-centric AI adop…
S90
Reskilling for the Intelligent Age / Davos 2025 — These key comments shaped the discussion by broadening the focus from purely technical skills to encompass leadership ab…
S91
From summer disillusionment to autumn clarity: Ten lessons for AI — Additionally, the EU’s long-negotiated AI Act imposes strict rules on AI systems (e.g. high-risk systems must meet safet…
S92
Building the Workforce_ AI for Viksit Bharat 2047 — The tone was formal and optimistic throughout, maintaining a diplomatic and collaborative atmosphere. Speakers consisten…
S93
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — The tone is constructive and collaborative throughout, with speakers building on each other’s points rather than disagre…
S94
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S95
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S96
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S98
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Honourable Prime Minister Modi, Excellencies, dear colleagues, ladies and gentlemen. It is a great honour for me to be i…
S99
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Both speakers positioned AI as one of the most significant disruptive forces in a generation, requiring organisations to…
S100
Driving India’s AI Future Growth Innovation and Impact — Thank you so much, Dr. Mohindra. I’m going to request you to please stay back on stage. I’d also like to invite Manish G…
S101
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — According to Moroccan Strategy Digital 2030, we consider AI as long-term strategic choice, reshaping competitiveness, s…
S102
ASEAN launches regionwide Digital Economy Framework Agreement — The Association of Southeast Asian Nations (ASEAN) has initiated talks about a Digital Economy Framework Agreement (DEFA) …
S103
LDCs Participation in Digital Economy Agreements and E-commerce Provisions in FTA (Cambodia) — LDCs in Asia have made ambitious commitments when they joined the ASEAN E-commerce Agreement and RCEP agreement. As more…
S104
Digital Economy Agreements and the Future of Digital Trade Rulemaking (DiploFoundation) — In summary, digital economy agreements play a crucial role in transforming the digital landscape and supporting business…
S105
A regional approach to e-commerce and digital trade in the Pacific (UNCTAD) — In Pacific countries, which are characterised by geographical dispersion and remoteness, digitalisation plays a crucial …
S106
ASEAN Digital Generation Report: Pathway to ASEAN’s inclusive digital transformation and recovery — The World Economic Forum (WEF) released a report on the pathway to the South-East Asia region’s (ASEAN) inclusive digital …
S107
WS #231 Address Digital Funding Gaps in the Developing World — Singh added regional context, noting that the Asia Pacific region presents unique challenges with the most advanced and …
S108
© 2019, United Nations — Latin America and Asia present more dynamic entrepreneurship and innovation ecosystems than those found in …
S109
Living in an Unruly World: The Challenges We Face — Over time, this changed gradually. The communist countries of Indochina joined ASEAN, as did Brunei and Myanmar. In 1997…
S110
ASEF OUTLOOK REPORT 2016/2017 — 116 Data available for 36 ASEM countries only. Data on Bangladesh, Brunei Darussalam, Cyprus, Germany, the Lao PDR, Latv…
S111
A Decade Later-Content creation, access to open information | IGF 2023 WS #108 — Geoff Huston:Yes, I have some modest thoughts here. Part of this is the beauty of markets is also a weakness. In transfo…
S112
Broadband from Space! Can it close the Digital Divide? | IGF 2023 WS #468 — Despite the notable advantages, several challenges need to be addressed to further develop and improve Starlink’s intern…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Aju Widya Sari
2 arguments · 113 words per minute · 491 words · 258 seconds
Argument 1
Indonesia faces significant infrastructure gaps that limit AI deployment, including insufficient 5G coverage, limited fixed broadband backbone, and a shortage of GPU‑enabled data centre capacity.
EXPLANATION
The speaker highlights that while mobile broadband penetration is high, the country still relies on 4G and lacks extensive 5G rollout. Fixed broadband and backbone networks are under‑developed, especially in remote districts, and existing data centres do not provide enough GPU resources for AI workloads.
EVIDENCE
She notes that Indonesia’s mobile broadband penetration is above 95 % but remains based on 4G, and fixed broadband penetration is only about 70 % in sub-district areas, requiring greater backbone investment [569-572]. She also points out that current data centre providers have limited GPU capacity, which hampers AI processing needs [575-578]. The government’s AI roadmap and ethical AI guidelines are being prepared to address these challenges [579-592].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Minister Patria rates global AI infrastructure at 6/10, highlighting Indonesia’s digital divide and archipelagic challenges [S30]; the postal network discussion notes connectivity hurdles in remote Indonesian areas [S31].
MAJOR DISCUSSION POINT
Infrastructure and capacity gaps for AI
Argument 2
The summit’s key lessons for Indonesia are the need for collaboration, inclusive policies, and sustained investment, while clarifying the definition of digital sovereignty.
EXPLANATION
The speaker summarizes that effective AI adoption requires joint effort among government, industry, and civil society, inclusive approaches that reach all citizens, and clear investment strategies. She also raises the question of how digital sovereignty should be defined and operationalised.
EVIDENCE
She states that the three main takeaways are collaboration, inclusivity, and investment, and that the definition of sovereign digital infrastructure remains an open question [716-722].
MAJOR DISCUSSION POINT
Strategic priorities post‑summit
Dr. Mahendra Karpan
2 arguments · 134 words per minute · 1394 words · 622 seconds
Argument 1
AI‑enabled telemedicine can dramatically improve healthcare access in Guyana’s remote and underserved communities.
EXPLANATION
By leveraging satellite connectivity and AI‑driven diagnostic tools, health workers in isolated villages can receive real‑time specialist support, reducing the need for costly patient travel and addressing critical health challenges such as cardiovascular disease.
EVIDENCE
He describes a network of over 200 telemedicine sites equipped with Starlink, enabling community health workers to transmit EKGs, X-rays and other vitals to specialists for real-time diagnosis, which is vital given the country’s dispersed coastal population and limited medical staff [81-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Karpan’s telemedicine example with satellite-connected sites and remote diagnostics is documented in the summit report on AI strategies for jobs and economic development [S1].
MAJOR DISCUSSION POINT
Telemedicine as a solution for remote healthcare
Argument 2
Digital primary schools can leapfrog traditional education models, offering personalised learning that frees children’s time for play and development.
EXPLANATION
AI‑driven digital classrooms can tailor content to each child’s strengths and weaknesses, potentially reducing school hours while improving literacy and numeracy outcomes, and can be replicated across the Caribbean region.
EVIDENCE
He reports that Guyana has created a digital primary school that adapts to individual learners, has attracted interest from neighboring Caribbean nations, and aims to condense eight hours of instruction into three focused hours [363-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A parallel discussion on primary-level digitalisation in Bangladesh highlights similar AI-driven personalised learning concepts [S36].
MAJOR DISCUSSION POINT
AI‑powered education for primary learners
Narendra Singh
2 arguments · 183 words per minute · 889 words · 291 seconds
Argument 1
India’s low construction cost for data centres gives it a decisive advantage in the global AI race.
EXPLANATION
Building a megawatt of data‑centre capacity in India costs roughly 4–6 million USD, far cheaper than the 12 million USD typical in the US, Singapore or Dubai, because most hardware is manufactured locally. This cost advantage can fuel a trillion‑dollar AI industry.
EVIDENCE
He cites the per-megawatt cost comparison (4-6 million USD in India versus 12 million USD elsewhere) and notes that 80-90 % of required components are produced domestically, highlighting the scale of the opportunity [218-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report on AI strategies notes India’s data-centre construction cost of $4-6 million per MW versus $12 million elsewhere, underscoring the cost advantage [S1].
MAJOR DISCUSSION POINT
Cost competitiveness of Indian data‑centre infrastructure
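As a quick sanity check on the scale of that gap, the per-megawatt figures can be worked through directly. This is a minimal sketch using the panel's round numbers; the 100 MW campus size is an illustrative assumption, not a figure from the session.

```python
# Back-of-the-envelope sketch of the per-megawatt cost gap Singh cites.
# All figures are the panel's round numbers; the 100 MW campus is hypothetical.
INDIA_COST_PER_MW_USD = (4_000_000, 6_000_000)  # low and high estimates
OVERSEAS_COST_PER_MW_USD = 12_000_000           # US / Singapore / Dubai

def build_cost_savings(capacity_mw: int) -> tuple[int, int]:
    """Return (min, max) USD savings from building capacity_mw in India."""
    low, high = INDIA_COST_PER_MW_USD
    return (
        capacity_mw * (OVERSEAS_COST_PER_MW_USD - high),  # worst case
        capacity_mw * (OVERSEAS_COST_PER_MW_USD - low),   # best case
    )

# At these rates, a hypothetical 100 MW campus costs 0.6-0.8 billion USD less.
print(build_cost_savings(100))  # (600000000, 800000000)
```

Even at the pessimistic end of the range, the quoted figures imply India's build cost is half the overseas benchmark, which is the basis of the "decisive advantage" claim.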
Argument 2
Affordable AI compute is essential to avoid job displacement and to protect existing call‑centre employment.
EXPLANATION
High AI‑chip and service fees risk making AI solutions more expensive than traditional call‑centre operations, potentially leading to large‑scale job losses. Policy should regulate AI pricing and promote upskilling to keep human agents viable.
EVIDENCE
He explains that AI calls cost about 7 rupees each versus 1 rupee for a human call-centre, and argues that without policy intervention, millions of jobs could be lost, emphasizing the need for upskilling initiatives [245-251].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Broader analysis of digital dividends warns that technology can threaten jobs and wages, supporting concerns about AI-driven displacement [S38].
MAJOR DISCUSSION POINT
Balancing AI cost with employment protection
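The economics behind this concern can be sketched from the per-call figures Singh quotes; the monthly call volume below is a hypothetical assumption, chosen only to show the absolute gap at scale.

```python
# Per-call cost comparison using Singh's quoted figures: roughly 7 INR for an
# AI-handled call versus 1 INR for a human-handled call. The monthly volume
# is a hypothetical assumption for illustration.
AI_COST_PER_CALL_INR = 7.0
HUMAN_COST_PER_CALL_INR = 1.0

def monthly_cost_gap(calls_per_month: int) -> float:
    """Extra monthly spend (INR) if all calls move from human agents to AI."""
    return calls_per_month * (AI_COST_PER_CALL_INR - HUMAN_COST_PER_CALL_INR)

# A centre handling one million calls a month would pay 6 million INR more:
# a 7x per-call premium that policy or cheaper compute would need to close
# before AI displaces, rather than augments, human agents.
print(monthly_cost_gap(1_000_000))  # 6000000.0
```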
Vinod Jhawar
2 arguments · 160 words per minute · 941 words · 350 seconds
Argument 1
Data‑centre expansion in India is constrained by power supply, land availability, and a shortage of skilled personnel, but renewable energy and high‑voltage grids can provide sustainable solutions.
EXPLANATION
Nextra’s experience shows that while power and land are major bottlenecks, sourcing renewable electricity and connecting to 700 kV national grids can ensure reliable, low‑carbon operation. Upskilling programmes are needed to address the talent gap.
EVIDENCE
He lists power, land and skill challenges [165-167], describes sourcing renewable energy and high-voltage grid connections [181-186], and notes the need for immediate skill-upgrade programmes at schools and universities [196-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of heterogeneous compute stresses India’s physical constraints-land, water, power-and the need for renewable energy and high-voltage grids [S39]; energy-constraint analysis for data centres further confirms the challenge [S40].
MAJOR DISCUSSION POINT
Sustainable data‑centre development
Argument 2
AI tools empower blue‑collar workers to self‑upskill and become entrepreneurs without formal degrees.
EXPLANATION
Generative AI can break language barriers and provide on‑the‑job learning, enabling workers to acquire new competencies, start businesses, or become coaches, thereby reshaping the education system toward self‑directed learning.
EVIDENCE
He explains that AI tools remove English as a barrier, allow self-learning, and can lead to specialist, entrepreneurial or coaching roles without traditional qualifications [362-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A panel on AI for blue-collar workers describes AI-powered marketplaces that address skill gaps and enable entrepreneurship [S43].
MAJOR DISCUSSION POINT
AI‑driven grassroots upskilling
Mohamed Kinaanath
1 argument · 91 words per minute · 1484 words · 975 seconds
Argument 1
The Maldives is establishing a comprehensive AI governance framework, including an AI readiness assessment, a national AI master plan, an AI Act, and an independent AI governance body.
EXPLANATION
Through a UNESCO‑supported AI Readiness Assessment, the government is drafting a master plan and legislation to ensure ethical AI deployment, with a multi‑stakeholder advisory council to oversee implementation.
EVIDENCE
He references the AI Readiness Assessment Report [468-470], the forthcoming AI master plan and AI Act [471-474], and the recommendation to create an independent AI governance body and advisory council [472-474].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Maldives’ AI readiness assessment and forthcoming master plan and AI Act are outlined in the summit’s Maldives-focused session report [S44].
MAJOR DISCUSSION POINT
Institutionalizing AI governance in the Maldives
Eugenio Vargas Garcia
2 arguments · 121 words per minute · 414 words · 204 seconds
Argument 1
AI can be a strategic tool for climate adaptation and sustainability, but developing countries need tech diplomacy to secure access to these technologies.
EXPLANATION
He argues that AI‑powered climate models and energy‑efficient data‑centre designs are essential for small island states, and that diplomatic engagement is required to ensure equitable technology transfer and capacity building.
EVIDENCE
He mentions AI’s role in climate-related data-centre energy efficiency and the necessity of tech diplomacy for the Global South to benefit from AI advancements [567-574].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tech-diplomacy’s role in AI access for the Global South is highlighted in a discussion on technology and diplomacy [S47]; climate diplomacy’s emphasis on partnerships for climate-related AI solutions provides additional context [S46].
MAJOR DISCUSSION POINT
AI for climate resilience and the role of tech diplomacy
Argument 2
Countries should start with small, high‑impact AI pilots in health, education and agriculture, seek international partners, and engage in tech diplomacy to scale AI responsibly.
EXPLANATION
He recommends a phased approach: launch quick‑impact projects, collaborate with foreign expertise, and participate in global forums to build capacity and secure resources.
EVIDENCE
He outlines three recommendations-small pilot projects, international cooperation, and tech-diplomacy engagement-drawing on his observations of summit discussions [697-704].
MAJOR DISCUSSION POINT
Pragmatic roadmap for AI adoption in the Global South
Parag Khanna
2 arguments · 164 words per minute · 1473 words · 535 seconds
Argument 1
Developing countries can achieve a second‑mover advantage in AI by leveraging cloud‑based and edge solutions that minimise capital expenditure.
EXPLANATION
Instead of building costly proprietary infrastructure, nations can adopt low‑cost, sovereign‑cloud and edge computing models, supported by digital public infrastructure (DPI), to rapidly scale AI services.
EVIDENCE
He describes the concept of AI leapfrogging, the benefits of cloud-based models, and the role of DPI in providing affordable, neutral platforms for AI deployment [603-618].
MAJOR DISCUSSION POINT
AI leapfrogging through low‑cost cloud and edge architectures
Argument 2
AI‑driven geospatial analytics can guide sustainable urbanisation and climate‑adaptation planning, delivering cost‑effective infrastructure decisions.
EXPLANATION
Geospatial AI tools can model decades of urban growth, predict housing needs, and assess climate risks, enabling governments to prioritise investments such as flood barriers or affordable housing.
EVIDENCE
He cites AI-powered urban expansion modeling, climate risk assessment, and the need for targeted infrastructure like seawalls and flood control, emphasizing that these tools are currently under-used worldwide [624-639].
MAJOR DISCUSSION POINT
Geospatial AI for urban and climate planning
Tejpreet S Chopra
3 arguments · 197 words per minute · 2261 words · 687 seconds
Argument 1
AI‑driven strategies are essential for redesigning workforce policies, building digital infrastructure, and ensuring inclusive economic growth.
EXPLANATION
The panelist frames AI as the most critical topic for governments, calling for three pillars: workforce redesign, digital/computing infrastructure, and inclusive, responsible AI‑fueled growth.
EVIDENCE
He lists the three critical elements-workforce redesign, digital infrastructure, and inclusive growth-while emphasizing AI’s impact on society, workforce and industries [4-8][25-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The US-India partnership powering the AI era underscores the strategic importance of AI-driven policy and infrastructure for inclusive growth [S42].
MAJOR DISCUSSION POINT
Strategic pillars for AI‑enabled economic development
Argument 2
India’s cheap renewable energy positions it to win the AI “arms race” by providing low‑cost compute power.
EXPLANATION
He notes that solar and wind tariffs have fallen dramatically, making energy among the cheapest globally, which can power AI supercomputers and data centres at competitive rates.
EVIDENCE
He cites his own solar farm cost dropping from 18 rupees/kWh to 2 rupees/kWh and wind farm costs similarly decreasing, highlighting India’s advantage in AI-related energy costs [203-212].
MAJOR DISCUSSION POINT
Energy cost advantage for AI competitiveness
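The tariff decline Chopra describes works out to a simple percentage drop. This sketch uses his round numbers, which are anecdotal figures from his own projects rather than audited tariffs.

```python
# Percentage decline in the solar tariff Chopra cites for his own farm:
# 18 INR/kWh then, 2 INR/kWh now (panel's round numbers, anecdotal).
OLD_TARIFF_INR_PER_KWH = 18.0
NEW_TARIFF_INR_PER_KWH = 2.0

decline = (OLD_TARIFF_INR_PER_KWH - NEW_TARIFF_INR_PER_KWH) / OLD_TARIFF_INR_PER_KWH
print(f"{decline:.0%}")  # prints "89%": roughly a nine-fold fall in per-kWh cost
```

That order-of-magnitude fall in energy price is what underpins the claim that India can run power-hungry AI compute at globally competitive rates.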
Argument 3
AI should augment rather than replace human labour; continuous upskilling and lifelong learning are vital to harness AI benefits while preserving jobs.
EXPLANATION
He stresses that collaboration, not displacement, is the preferred model, and that societies must invest in ongoing education and skill development to keep pace with rapid AI advances.
EVIDENCE
He summarises the panel’s key takeaways on collaboration, the need for continuous learning, and the importance of upskilling across ages [376-379][395-397].
MAJOR DISCUSSION POINT
Collaboration over displacement and the need for lifelong learning
Son Sokeng
1 argument · 128 words per minute · 501 words · 234 seconds
Argument 1
Cambodia is drafting a national AI strategy and governance framework that prioritises people, infrastructure, and AI adoption, with a goal to train 100,000 AI‑ready professionals and upskill 10,000 government officials.
EXPLANATION
The roadmap includes a digital skills programme, a national AI governance draft, and targets for talent development, reflecting a human‑centred approach to AI policy.
EVIDENCE
He outlines the AI strategy’s five priorities, the 100 k talent target, and the training of over 10 k officials as part of the Cambodia Digital Skills Roadmap [548-564].
MAJOR DISCUSSION POINT
Human‑centred AI strategy and talent development in Cambodia
Kip Wainscott
2 arguments · 154 words per minute · 964 words · 374 seconds
Argument 1
Trust is the cornerstone for AI deployment; financial services already possess robust trust architectures that can be extended to other sectors through shared standards.
EXPLANATION
He argues that AI’s value is unlocked only when users trust it, and that the financial industry’s model risk management and governance practices provide a template for broader AI trust frameworks.
EVIDENCE
He describes the financial sector’s model-risk management, governance documentation, and ongoing monitoring as essential trust mechanisms that could be standardised for AI across industries [658-670].
MAJOR DISCUSSION POINT
Leveraging financial‑sector trust models for AI
Argument 2
There is growing optimism that multi‑stakeholder collaboration will establish AI legitimacy and move the conversation from technical capability to responsible, trusted deployment.
EXPLANATION
He notes that the summit’s inclusive format is fostering intentionality and legitimacy, shifting focus toward operationalising AI responsibly rather than merely showcasing technical feats.
EVIDENCE
He expresses optimism about the collaborative environment, the need for legitimacy, and the transition from hype to responsible deployment [681-689].
MAJOR DISCUSSION POINT
Optimism about collaborative trust‑building
Dipali Khanna
2 arguments · 136 words per minute · 674 words · 295 seconds
Argument 1
Women must be placed at the centre of AI initiatives; gender inclusion is critical for equitable AI development.
EXPLANATION
She observes that women are often peripheral in AI discussions and stresses that strong female leadership is needed to ensure AI benefits are gender‑balanced.
EVIDENCE
She points out that women are at the periphery of AI work and that strong women leaders are essential to manage the sector effectively [490-493].
MAJOR DISCUSSION POINT
Gender inclusion in AI
Argument 2
Philanthropic “patient capital” can catalyse trusted AI by funding regulatory sandboxes, talent pipelines, and institutional capacity building.
EXPLANATION
The Rockefeller Foundation can provide risk‑tolerant funding that supports early‑stage AI governance experiments, helps develop skilled personnel, and strengthens public institutions to foster trustworthy AI ecosystems.
EVIDENCE
She outlines how patient capital can back regulatory sandboxes, talent development, and institutional foundations for trusted AI deployment [511-519].
MAJOR DISCUSSION POINT
Catalytic role of philanthropic capital for trusted AI
Audience
2 arguments · 154 words per minute · 294 words · 114 seconds
Argument 1
Hydrogen fuel‑cell technology has not scaled due to infrastructure gaps and R&D challenges.
EXPLANATION
An audience member asks why hydrogen fuel cells remain experimental, and the response highlights the lack of nationwide hydrogen infrastructure and ongoing research needs.
EVIDENCE
The question about hydrogen fuel‑cell deployment is raised [232-233], and the answer cites bottlenecks such as missing hydrogen infrastructure and the need for further R&D collaboration [237-244].
MAJOR DISCUSSION POINT
Barriers to large‑scale hydrogen adoption
Argument 2
The biggest challenge for AI pilot companies is overcoming technical barriers that limit real‑world impact.
EXPLANATION
An audience member, representing an AI startup, asks about the primary obstacles, prompting a call for discussion on technical challenges that hinder scaling.
EVIDENCE
The audience member states that technical barriers are the biggest challenge for AI pilots [279-283].
MAJOR DISCUSSION POINT
Technical hurdles for AI startups
Moderator
2 arguments · 165 words per minute · 860 words · 312 seconds
Argument 1
Trust is now a pre‑condition for scaling AI; the summit must move from abstract principles to operational blueprints that demonstrate how trust can be built in practice.
EXPLANATION
The moderator frames the session as focusing on concrete mechanisms for trustworthy AI deployment, emphasizing that trust is essential for adoption across governments, enterprises and societies.
EVIDENCE
He notes that trust is no longer a downstream concern but a condition for scale and that the session will surface operational blueprints for building trust [410-425].
MAJOR DISCUSSION POINT
Trust as a prerequisite for AI scale
Argument 2
Identifying the single biggest obstacle to operationalising trust in each participant’s context is essential for collaborative problem‑solving.
EXPLANATION
The moderator repeatedly asks panelists to pinpoint their main trust‑related challenge, underscoring the need for shared understanding to drive collective action.
EVIDENCE
He repeats the request for each speaker to name their biggest obstacle to operationalising trust [536-543].
MAJOR DISCUSSION POINT
Pinpointing trust‑related obstacles
H.E. Sokeng
1 argument · 144 words per minute · 158 words · 65 seconds
Argument 1
Honesty among industry, government and civil society is vital; regulation should enable innovation rather than impede it, and trust must be cultivated through transparent collaboration.
EXPLANATION
He stresses that all stakeholders must be truthful with each other, that regulatory frameworks should promote, not block, innovation, and that building trust is a collective responsibility.
EVIDENCE
He calls for honesty, notes that regulation should promote innovation and not be an obstacle, and emphasizes the need to build trust through collaborative effort [706-714].
MAJOR DISCUSSION POINT
Regulation as an enabler of innovation and trust
Satvinder Singh
2 arguments · 175 words per minute · 1913 words · 652 seconds
Argument 1
The Digital Economy Framework Agreement (DEFA) will boost digital integration across ASEAN, delivering the greatest economic and employment benefits to the least‑developed member states.
EXPLANATION
By creating a legally binding digital market for 700 million people, DEFA can double the region’s digital economy size and generate jobs especially in LDCs that currently lack digital infrastructure.
EVIDENCE
He explains that DEFA’s impact will be strongest for LDCs, showing job and growth potential, and that money (jobs, economic growth) is the key driver for participation [38-44].
MAJOR DISCUSSION POINT
DEFA’s role in inclusive digital growth
Argument 2
Upskilling and continuous learning are essential to equip the workforce for AI‑driven change, especially for older generations.
EXPLANATION
He argues that while younger workers adapt more easily, all age groups need lifelong learning programmes to stay relevant as AI reshapes job markets.
EVIDENCE
He notes that upskilling must be continuous, that older generations may find it harder, and that lifelong skills are becoming the norm [303-309].
MAJOR DISCUSSION POINT
Continuous upskilling for AI readiness
Nihar Shah
3 arguments · 198 words per minute · 1151 words · 348 seconds
Argument 1
Energy consumption, cooling and water use are critical blind spots for AI infrastructure that must be addressed through renewable sources and efficient design.
EXPLANATION
He highlights that AI data‑centres require massive power and cooling, and that water consumption is often overlooked; renewable energy and innovative cooling solutions are needed to avoid bottlenecks.
EVIDENCE
He discusses the need for energy, cooling, and water considerations, emphasizing renewable energy and the lack of attention to these issues [110-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analysis of heterogeneous compute notes cooling and power as major constraints for AI deployment [S39]; a separate report stresses energy constraints for data-centre scaling [S40].
MAJOR DISCUSSION POINT
Energy, cooling and water as AI infrastructure bottlenecks
Argument 2
AI can improve hardware efficiency by designing better chips and data‑centre components, delivering up to 30 % performance gains.
EXPLANATION
He cites examples where AI‑driven design outperformed human engineers, suggesting that similar approaches could reduce AI infrastructure costs and energy use.
EVIDENCE
He mentions DeepMind’s AI designing chips with a 30 % performance improvement and the broader potential for AI‑enhanced data‑centre design [351-356].
MAJOR DISCUSSION POINT
AI‑assisted hardware optimisation
Argument 3
Hydrogen fuel‑cell adoption is hindered by the lack of a national hydrogen infrastructure and ongoing R&D challenges.
EXPLANATION
He notes that without widespread hydrogen refuelling stations and further research, fuel‑cell technology cannot be scaled, though collaborations with national missions are underway.
EVIDENCE
He points out bottlenecks such as missing hydrogen infrastructure and the need for R&D, referencing collaboration with India’s National Hydrogen Mission [237-244].
MAJOR DISCUSSION POINT
Infrastructure and R&D gaps for hydrogen fuel cells
Takeaways
Key takeaways
AI will augment rather than fully replace most jobs, especially white‑collar roles; continuous upskilling and lifelong learning are essential.
Human empathy and judgment remain irreplaceable in critical sectors such as healthcare and emergency response.
Building affordable, renewable‑powered compute infrastructure (large‑scale data centres and edge AI) is a prerequisite for AI‑driven economic growth.
Power, cooling and skilled‑personnel shortages are the main bottlenecks for scaling AI infrastructure.
Inclusive AI strategies are needed for developing economies: DEFA for ASEAN, AI‑enabled telemedicine and digital schools in Guyana, AI readiness assessments in the Maldives and Cambodia, and broadband/5G gaps in Indonesia.
Trust must be embedded from the outset; multi‑stakeholder governance bodies, AI Acts, and model‑risk‑management practices are critical for responsible deployment.
Public‑private partnerships, catalytic patient capital, and international tech diplomacy are vital to fund and operationalise AI projects in the Global South.
Sector‑specific AI applications (telemedicine, primary‑care surveillance, agricultural management, geospatial urban planning, climate‑adaptation modelling) demonstrate immediate productivity and societal benefits.
Government subsidies (e.g., India’s AI mission funding, GPU pricing) and lower renewable energy costs can make AI affordable for MSMEs and spur a digital‑first economy.
Resolutions and action items
Proceed with the Digital Economy Framework Agreement (DEFA) across ASEAN to create a legally‑binding digital interoperability regime.
Finalize and implement national AI strategies and governance frameworks in the Maldives, Cambodia, and Indonesia, including independent AI oversight bodies.
Scale up renewable‑energy‑sourced data‑centre campuses (Nextra) and pursue net‑zero targets by the early 2030s.
Launch large‑scale upskilling and reskilling programmes targeting 100,000 AI‑ready talent in Cambodia and broader continuous learning initiatives in the region.
Provide targeted subsidies and low‑cost GPU access (as in India’s AI mission) to lower entry barriers for MSMEs and AI startups.
Encourage development of indigenous AI chips to reduce hardware costs and improve margins for AI services.
Establish international collaboration mechanisms (tech diplomacy, patient‑capital funds, regulatory sandboxes) to support AI deployment in health, agriculture, climate and urban planning.
Create pilot projects with built‑in trust mechanisms (transparent models, auditability, grievance redress) to demonstrate responsible AI at scale.
Unresolved issues
High cost of AI services versus cheaper human labour (e.g., call‑centre costs) and its impact on employment.
Infrastructure gaps: insufficient 5G/broadband coverage, limited GPU capacity, and cooling/power constraints in many regions.
Talent shortage: lack of skilled engineers to operate and maintain data centres and AI systems.
Need for indigenous AI chip production to achieve sustainable cost reductions.
Hydrogen fuel‑cell deployment remains limited due to infrastructure and R&D bottlenecks.
Exact mechanisms for balancing regulation with innovation: how to design AI Acts that promote growth without stifling it.
Long‑term governance of AI ethics and risk‑assessment frameworks; many countries are still drafting policies.
Determining the optimal balance between cloud‑centric and edge‑centric AI architectures for diverse use cases.
Suggested compromises
Adopt a collaboration‑first approach: AI augments human workers rather than displacing them, preserving empathy‑driven roles.
Implement policy that promotes innovation while providing safeguards: regulation as an enabler, not a barrier.
Combine cloud and edge solutions: use large‑scale data centres for heavy workloads and edge AI for factory‑floor applications.
Provide subsidies and low‑cost GPU access to offset high AI‑chip expenses while encouraging domestic chip development.
Leverage international partnerships and tech diplomacy to share resources and expertise, reducing individual country burden.
Encourage self‑learning AI tools for blue‑collar workers to upskill without formal degree requirements.
Pilot AI projects with built‑in trust frameworks (transparent models, independent audits) before wider rollout.
Thought Provoking Comments
DEFA is the largest regional digital agreement in the world, legally binding, and its biggest beneficiaries will be the least‑developed economies in ASEAN, giving them the greatest per‑capita gains in jobs and economic growth.
Highlights how a coordinated digital framework can directly uplift the poorest members of a region, turning a typical top‑down tech narrative on its head.
Shifted the conversation from generic AI benefits to concrete policy mechanisms; prompted other panelists to reference regional cooperation and the need for inclusive frameworks.
Speaker: Satvinder Singh
In Guyana we have set up 200 tele‑medicine sites with Starlink connectivity, allowing community health workers to get real‑time specialist advice; AI can improve diagnosis speed and accuracy, but the human touch in emergencies can never be replaced.
Provides a vivid, real‑world example of AI augmenting scarce healthcare resources while preserving essential human empathy, grounding abstract AI debates in lived experience.
Steered the discussion toward concrete health‑care applications, leading others (e.g., Tejpreet) to explore AI’s role in medical diagnostics and the risks of hallucinations.
Speaker: Dr. Mahendra Karpan
Energy and cooling are blind spots in AI scaling; AI can even design better chips (30% performance gain) and data‑center architectures, but without addressing power, cooling, and water consumption we’ll hit hard bottlenecks.
Introduces the often‑overlooked physical infrastructure constraints that limit AI growth, and shows AI itself can help solve those constraints, adding a meta‑layer to the discussion.
Prompted Vinod and Narendra to discuss renewable‑energy‑sourced data centres and cost advantages, expanding the dialogue from software to hardware and sustainability.
Speaker: Nihar Shah
AI’s biggest impact on jobs right now is on white‑collar roles through collaborative augmentation, not on blue‑collar jobs; governments will not hand over full automation of high‑skill jobs, and policy will need to manage this transition.
Challenges the common fear that AI will wipe out all jobs, refocusing the debate on augmentation, sectoral differences, and the necessity of proactive policy.
Redirected the conversation toward upskilling, regulatory frameworks, and the societal implications of AI, influencing later remarks by other panelists on education and continuous learning.
Speaker: Satvinder Singh
Leap‑frogging with AI means using cloud‑based, sovereign solutions to get second‑mover advantage; AI‑powered geospatial tools can guide sustainable urbanisation and climate‑adaptation, yet most countries are still years behind.
Frames AI as a strategic tool for development challenges (housing, climate) rather than just a commercial technology, and introduces the concept of second‑mover advantage for the Global South.
Opened a new thematic strand on AI for public‑good applications (urban planning, climate resilience), prompting participants to consider AI’s role beyond economic growth.
Speaker: Parag Khanna
Trust is not a downstream concern but the condition for scale; we need a shared set of expectations and trust models so that AI can be deployed responsibly across sectors, especially in regulated industries like finance.
Elevates trust from an abstract principle to a practical prerequisite for AI adoption, linking governance, procurement, and deployment in a concrete way.
Unified the panel around the need for common trust frameworks, influencing the closing remarks and reinforcing the summit’s focus on operationalising trusted AI.
Speaker: Kip Wainscott
Women are at the periphery of AI discussions; we must bring women back to the centre and ensure they are not just managed by men but are leaders in shaping AI.
Spotlights a critical equity gap often ignored in tech dialogues, urging the panel to consider gender inclusion as integral to AI governance.
Introduced a social‑inclusion dimension that was not previously addressed, prompting later comments about inclusive policy and the need for diverse stakeholder participation.
Speaker: Dipali Khanna
Overall Assessment

The identified comments acted as catalytic moments that moved the panel from a broad, introductory framing of AI’s economic promise to a nuanced, multi‑dimensional dialogue. Satvinder Singh’s DEFA insight and Dr. Karpan’s tele‑medicine case grounded the talk in regional policy and concrete health outcomes. Nihar Shah’s infrastructure warning and Vinod’s renewable‑energy response shifted focus to the physical limits of AI scaling. The discussion on jobs, led by Satvinder Singh, reframed AI as augmentative rather than purely disruptive, prompting calls for upskilling and continuous learning. Parag Khanna’s leap‑frogging narrative expanded the scope to climate and urban challenges, while Kip Wainscott’s emphasis on trust tied all technical and policy threads together, establishing a shared prerequisite for deployment. Finally, Dipali Khanna’s reminder of gender inclusion added a vital equity lens. Collectively, these comments redirected the conversation toward inclusive, sustainable, and trust‑based AI strategies, shaping the summit’s concluding emphasis on collaboration, continuous learning, and responsible governance.

Follow-up Questions
What are the technical barriers and biggest challenges for an AI pilot company to achieve full‑time impact?
Identifying these obstacles is crucial for startups to scale AI solutions effectively and attract investment.
Speaker: Audience member (CTO of MindEquity.ai, founder of AI Society)
Why have hydrogen fuel cells not been implemented at large scale in railways and buses in India and abroad?
Understanding the bottlenecks (technical, infrastructural, economic) can guide policy and funding for clean‑energy transport.
Speaker: Harsh Vartan (audience)
How can upskilling/reskilling strategies preserve jobs while keeping a human‑in‑the‑loop?
Balancing automation with employment security is essential for social stability and inclusive economic growth.
Speaker: Audience member (question to Mr. Satvinder) and Satvinder Singh (panelist)
Will governments provide subsidies or incentives for AI projects similar to the solar‑energy subsidies that boosted the solar revolution in India?
Financial incentives could accelerate AI adoption among SMEs and reduce the cost barrier for innovative AI applications.
Speaker: Audience member (question after discussion on solar subsidies)
What is the single biggest obstacle to operationalising trust in AI in each participant’s context, and how can this room help address it?
Pinpointing trust barriers (legal, technical, cultural) is a prerequisite for responsible AI deployment at scale.
Speaker: Moderator (directed to panel)
How optimistic are you about building collaborations that establish trust in AI across sectors and borders?
Assessing confidence levels helps gauge momentum for multi‑stakeholder governance frameworks.
Speaker: Kip Wainscott (Executive Director, Global AI Policy, JPMorgan Chase)
Area for further research: Quantitative impact of AI on ASEAN labour markets, especially the differential effects on white‑collar versus blue‑collar jobs.
Robust data is needed to design policies that mitigate displacement while leveraging productivity gains.
Speaker: Satvinder Singh
Area for further research: Energy, cooling, and water consumption bottlenecks for large‑scale data‑center expansion supporting AI workloads.
Sustainable infrastructure planning requires detailed metrics on power, thermal management, and water use.
Speaker: Nihar Shah (Lawrence Berkeley National Lab)
Area for further research: Development of indigenous AI chips to lower compute costs and reduce dependence on imported hardware.
Chip cost is a major barrier for AI adoption in emerging markets; local chip design could democratise access.
Speaker: Narendra Singh (MD, RackBank / NeveCloud)
Area for further research: Using AI to design more efficient chips and data‑center architectures (AI‑for‑AI optimisation).
Meta‑optimization could yield significant performance and energy gains, accelerating sustainable AI growth.
Speaker: Nihar Shah
Area for further research: Impact of cross‑border telemedicine on healthcare workforce roles and patient outcomes.
Understanding how AI‑enabled remote diagnostics reshapes job requirements and quality of care is vital for health policy.
Speaker: Dr. Mahendra Karpan
Area for further research: Effectiveness of digital schooling and AI‑driven personalised education for primary children in remote regions.
Evaluating learning outcomes will inform scaling of AI‑enhanced education in low‑resource settings.
Speaker: Dr. Mahendra Karpan
Area for further research: Comparative analysis of cloud‑centric versus edge‑centric AI deployment models for Indian MSMEs.
Determining optimal architecture influences cost, latency, and scalability for the 70 million Indian MSMEs.
Speaker: Tejpreet S. Chopra and Vinod Jhawar
Area for further research: AI‑powered geospatial tools for sustainable urbanisation and climate‑adaptation planning in developing countries.
These tools can guide infrastructure investment and risk mitigation, but require validation and localisation.
Speaker: Parag Khanna
Area for further research: Creation of a unified, interoperable trust framework and standards for AI models across sectors and jurisdictions.
Standardised trust metrics would streamline procurement, compliance, and cross‑border deployment of AI systems.
Speaker: Kip Wainscott
Area for further research: Role of tech diplomacy and international cooperation in building AI capacity for Global South nations.
Coordinated policies and knowledge‑sharing can overcome resource constraints and accelerate AI adoption.
Speaker: Son Sokeng (Cambodia) and Eugenio Vargas Garcia (Brazil)
Area for further research: Mitigating AI hallucinations and ensuring safety in medical diagnosis applications.
Preventing erroneous AI outputs is critical to maintain trust and patient safety in healthcare.
Speaker: Tejpreet S. Chopra

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Scaling AI for Billions_ Building Digital Public Infrastructure

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by framing AI and cybersecurity as a two-way relationship, with AI being used to protect systems while security is needed to safeguard AI models themselves [1-4]. Daisy highlighted that AI brings both an opportunity to manage increasingly complex threats at machine scale and a set of risks such as model jail-breaking, data leakage and vulnerabilities in open-source models [12-23]. Samrat noted that AI has moved from the application layer into core infrastructure, making it a fundamental component of system design [25-28]. Narendra warned that the rapid, “breakneck” adoption of AI gives adversarial nation-states and enterprises powerful new tools, while the lack of a separate control plane makes models vulnerable to drift and poisoning, creating national-scale security challenges across sectors [40-44][48-52].


Lakshmi argued that today’s digital infrastructure is already fragile and that AI will amplify this fragility by vastly increasing east-west traffic and long-lived API calls at the edge, stressing networks and platforms [61-70][71-76]. He proposed an “AI operating system” that layers context, agentic control and governance to ensure trust and prevent model misuse [87-90]. Richard added that human users remain the weakest link, especially as deep-fakes blur the line between legitimate and malicious communications, and that resilience now requires detailed visibility into AI-driven actions and careful pacing of deployments [99-110]. Daisy reinforced the gap between enterprise ambition and readiness, noting that most large firms lack a data strategy, compute capacity, and the ability to understand AI-related threats, and she called for a shift from hardware-centric security appliances to a virtual, distributed security mesh that accommodates AI’s probabilistic nature [119-151].


Dharshan emphasized the dual emotions of hope and fear, pointing out that AI levels the playing field for defenders through SOC agents and can also create new talent pipelines, while CXOs must balance regulatory compliance with operational and strategic AI risks [158-176][184-191]. Pradeep expanded on this by describing three risk lenses: compliance, operational (model reliability and trust), and strategic (reputation and financial impact). He stressed that AI acts as a force multiplier for both attackers and defenders [223-235]. Narendra highlighted the need for capacity building, assessment frameworks and sandbox regulations to evaluate AI security before production deployment, leveraging existing institutional structures such as CERT-India and sectoral sandboxes [241-270]. Looking ahead, Lakshmi outlined a self-developed assessment framework that plots capability (talent, culture, platform) against desired outcomes (efficiency, revenue, trust), and warned that AI-native companies will likely disrupt existing business models within five years [282-315]. The discussion concluded that while AI promises transformative benefits for cybersecurity, realizing them requires coordinated governance, trust mechanisms, infrastructure redesign, and strategic foresight to mitigate emerging risks [91-93][318-322].


Keypoints

Major discussion points


AI is both a security tool and a new attack surface.


The panel opened by distinguishing “AI for cybersecurity” and “cybersecurity for AI” and highlighted that AI brings opportunity (e.g., scaling security operations) and challenge (e.g., model jail-breaks, data leakage, poisoning) [3][8-13][21-24].


Speed of adoption outpaces risk mitigation, creating a geopolitical “arms race.”


Narendra emphasized that AI is being adopted at a “breakneck speed” and that nation-states and adversarial enterprises are already weaponising it, widening the gap between defenders’ productivity gains and attackers’ capabilities [40-45][46-49].


Current digital infrastructure is fragile, and AI amplifies that fragility.


Lakshmi warned that enterprises are “running towards the cliff” because existing IT/OT systems are already weak; AI multiplies the strain (e.g., massive east-west traffic, long-lived API sessions at the edge) and demands a fundamentally new “AI operating system” with trust and governance layers [61-70][71-76][84-90].


Governance, trust, and risk-assessment frameworks are essential for responsible AI deployment.


The need for an AI operating system that embeds context, agents, and a trust/governance layer was reiterated, and Pradeep outlined three risk lenses (compliance, operational, and strategic) that boards must adopt to evaluate AI-driven decisions and trustworthiness [84-90][206-214][222-232].


Future outlook: AI will reshape talent, business models, and national strategy, but only if organizations act now.


Dharshan highlighted the “hope” side (AI leveling the defender-attacker playing field and creating new talent pipelines) while Lakshmi and others warned that without a clear assessment framework and strategic foresight, AI-native disruptors will overtake incumbents within five years [158-170][184-190][278-315].


Overall purpose / goal of the discussion


The panel was convened to examine the dual impact of artificial intelligence on cybersecurity-both as an enabler for defending systems and as a new vulnerability vector-and to surface practical, strategic, and policy-level actions (governance models, risk frameworks, infrastructure redesign, talent development) that governments, enterprises, and regulators should adopt to harness AI responsibly while mitigating its emerging threats.


Overall tone and its evolution


Opening (0:00-2:00): Curious and exploratory, with participants framing AI as a transformative opportunity.


Mid-section (2:00-10:00): The tone shifts to cautionary urgency, emphasizing rapid adoption, adversarial use, and the fragility of existing infrastructure.


Later segment (10:00-20:00): Becomes constructive and solution-focused, introducing concepts such as AI operating systems, trust layers, and new governance models.


Closing (20:00-38:00): Moves toward a forward-looking, balanced optimism, recognising risks but also highlighting strategic opportunities, talent development, and the need for proactive planning over the next five years.


Overall, the conversation progresses from inquisitive optimism to measured concern, then to pragmatic recommendations, ending on a cautiously hopeful note about shaping AI-driven cybersecurity futures.


Speakers

Samrat Kishor


Expertise: AI, cybersecurity, digital infrastructure (moderator)


Role/Title: Moderator/Host of the panel discussion


Daisy Chittilapilly


Expertise: AI, cybersecurity, networking, digital transformation


Role/Title: Cisco representative (speaker on AI and resilience)


G. Narendra Nath


Expertise: National security, cybersecurity policy, AI governance


Role/Title: Government official involved in national security and cybersecurity frameworks (CERT India, DRD)


Dharshan Shanthamurthy


Expertise: Cybersecurity, AI, deep-tech consulting, thought leadership


Role/Title: Leader at a hardcore deep-tech cybersecurity company, consultant and thought-leader for large enterprises and government officials


Pradeep Sekar


Expertise: AI risk management, cybersecurity strategy, regulatory compliance


Role/Title: Panelist (cybersecurity professional)


Richard Marko


Expertise: Cybersecurity resilience, AI-enabled threats, human factors in security


Role/Title: Speaker (cybersecurity expert)


A. S. Lakshminarayanan


Expertise: Digital infrastructure, AI operating systems, trust & governance in AI


Role/Title: Executive at Tata Communications (referred to as “Lakshmi, sir, from Tata”)


Additional speakers:


Ms. Zazie


Expertise:


Role/Title:


Full session report: Comprehensive analysis and detailed insights

1. Opening & framing – Samrat Kishor opened the panel by framing artificial intelligence (AI) and cybersecurity as a two-way relationship: AI can be deployed for cybersecurity, while cybersecurity is required for AI itself. He asked Daisy Chittilapilly about the big-picture changes that AI is bringing to security [1-4][6-7].


2. Opportunity & challenge – Daisy Chittilapilly (Cisco) explained that, as with any new technology, AI is simultaneously an opportunity and a challenge. The expanding cyber-threat landscape, driven by ever-greater connectivity and “phygital” lives, has outgrown human-scale defence, prompting a shift to machine-scale tools [12-14]. AI promises to improve security management at that scale [15-16], yet it also introduces novel risks: models can be jail-broken, confidential data may leak, and open-source models carry inherent vulnerabilities that must be detected and mitigated [21-24].


3. AI as infrastructure – Samrat noted that AI has moved from being a mere application-layer add-on to becoming a fundamental component of the technology stack, embedded in the systems that organisations design, build and operate [25-30][31-33]. He then turned to G. Narendra Nath for a national-security perspective.


4. National-security perspective (Narendra)


– AI adoption is occurring at “breakneck speed”, outpacing the development of safeguards, and nation-states as well as large adversarial enterprises are already weaponising AI [40-45].


– Unlike traditional systems that separate control- and data-planes, AI models use the data itself as the control plane, making them vulnerable to poisoning, drift and non-deterministic behaviour; over time a model can “drift” and stop behaving as expected, blurring the line between a cyber-security incident and a poor AI design [48-52].


– The rapid spread across finance, telecom, power and other critical sectors raises systemic risk [55-60].


5. Critical-infrastructure view (Lakshmi) – Samrat asked A. S. Lakshminarayanan (Lakshmi) about the state of existing digital infrastructure. He argued that enterprises are “running towards the cliff” because current IT/OT systems are already weak, and AI will multiply that fragility roughly a hundred-fold by dramatically increasing east-west traffic and long-lived API sessions at the edge [61-70][71-76].


– To address this, he proposed an AI operating system composed of a context layer, an agentic layer and a trust/governance layer, enabling organisations to turn LLM-derived knowledge into governed, actionable intelligence [84-90].


– Lakshmi warned that AI will scale decisions, not just transactions, and that, just as booking.com and fintechs disrupted incumbents after the internet wave, a new class of AI-native companies will likely upend existing business models [308-315].
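Lakshmi’s three-layer “AI operating system” (a context layer, an agentic layer, and a trust/governance layer) can be sketched as a simple pipeline. The sketch below is a hypothetical illustration only, not an actual Tata Communications design: every class name, the request fields, and the allow-list policy are invented for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the three-layer "AI operating system" described above:
# a context layer enriches a request, an agentic layer proposes an action,
# and a trust/governance layer decides whether the action may execute.

@dataclass
class Request:
    user: str
    task: str
    context: dict = field(default_factory=dict)

class ContextLayer:
    def enrich(self, req: Request) -> Request:
        # Attach enterprise context (data classification, policies) to the request.
        req.context["data_classification"] = "confidential"
        return req

class AgenticLayer:
    def plan(self, req: Request) -> str:
        # An agent turns the task into a concrete action (stubbed here).
        return f"action:{req.task}"

class GovernanceLayer:
    ALLOWED = {"summarize", "classify"}  # illustrative policy allow-list

    def authorize(self, req: Request, action: str) -> bool:
        # Only actions on the allow-list may run against governed data.
        task = action.removeprefix("action:")
        return task in self.ALLOWED

def run(req: Request) -> str:
    req = ContextLayer().enrich(req)
    action = AgenticLayer().plan(req)
    if not GovernanceLayer().authorize(req, action):
        return "denied"
    return action

print(run(Request("analyst", "summarize")))   # action:summarize
print(run(Request("analyst", "exfiltrate")))  # denied
```

The point of the sketch is the ordering: the governance layer sits between the agent’s proposed action and its execution, so model misuse is blocked by policy rather than by the model’s own behaviour.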


6. Corporate-AI-responsibility – Samrat linked the discussion to the need for “corporate AI responsibility”, likening it to corporate social responsibility (CSR) as a governance imperative.


7. Resilience & human factor (Richard) – Richard Marko highlighted the human factor as the weakest link. Deep-fakes and AI-generated phishing make it harder to distinguish legitimate from malicious communications [98-100], and true resilience now requires granular visibility into what AI agents are doing in the background, how commands are transferred, and whether they can be intercepted or altered [101-105][106-110].


8. Digital-infrastructure readiness (Daisy)


– Daisy presented Cisco’s AI readiness index, revealing a stark ambition-versus-reality gap: about 90% of just under 1,000 large Indian enterprises plan to deploy AI agents this year, yet only two-thirds have a data strategy, one-fourth possess sufficient compute capacity, and less than one-fifth understand AI-related threats [118-123].


– She argued that traditional, hardware-centric security appliances are becoming obsolete; security must become virtual, distributed and embedded in the network fabric, rewiring the entire stack (silicon, compute, networking) to accommodate AI’s probabilistic nature, which demands new rules for applications that can no longer guarantee deterministic outputs [124-132][136-151].


9. Hope vs. fear (Dharshan) – Dharshan Shanthamurthy described the emotional duality surrounding AI. He noted that AI levels the playing field for defenders (e.g., SOC agents can automate shift handovers) and creates new talent pipelines for a deep-tech cybersecurity workforce [158-176]. He called for an AI security operating system or playbook that enables organisations to proactively leverage AI rather than merely react to threats [184-191].


10. Board-level risk lenses (Pradeep) – Pradeep Sekar outlined three risk lenses for board-level oversight: compliance risk (e.g., EU AI Act, sectoral regulations), operational risk (model reliability, availability, trust) and strategic risk (reputation and financial impact of AI-driven attacks) [222-236]. He cited Microsoft’s Security Copilot as a concrete example of AI automating SOC tasks [212-215], and warned that attackers can industrialise phishing and social engineering at unprecedented scale [216-219].


11. Government capacity-building (Narendra)


– Narendra highlighted a “digital-AI divide” across sectors and stressed the need for capacity-building and assessment frameworks.


– Existing institutional mechanisms such as CERT-India, CIPC, and sector-specific sandboxes (RBI’s sandbox for finance and the telecom regulator’s sandbox) can be leveraged to test AI systems before production [241-247].


– He announced a government-funded project (started November 2024) to develop AI-security assessment standards, alongside the ETI framework, to provide systematic evaluation of AI deployments [260-267][268-270].


12. Five-year outlook (Lakshmi) – Lakshmi described Tata Communications’ internal assessment framework, which plots capability (talent, culture, platform) against outcomes (efficiency, revenue, trust) on a two-axis matrix [282-295][300-304]. He warned that the next five years will determine the long-term health of companies, as AI-native disruptors reshape markets [308-315].
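A two-axis matrix of this kind can be illustrated with a small scoring sketch. Everything below is an assumption for illustration, not the actual Tata Communications framework: the 0–10 scale, the equal weighting of components, the 5.0 threshold, and the quadrant labels are all invented.

```python
# Illustrative sketch of a capability-versus-outcome assessment matrix:
# capability (talent, culture, platform) on one axis, desired outcomes
# (efficiency, revenue, trust) on the other. Scales and labels are hypothetical.

def score(components: dict[str, int]) -> float:
    """Average the component scores (each assumed to be on a 0-10 scale)."""
    return sum(components.values()) / len(components)

def quadrant(capability: float, outcome: float, threshold: float = 5.0) -> str:
    # Map a (capability, outcome) point to one of four illustrative quadrants.
    if capability >= threshold and outcome >= threshold:
        return "scale: strong capability, clear outcomes"
    if capability >= threshold:
        return "refocus: capability without defined outcomes"
    if outcome >= threshold:
        return "invest: ambition ahead of capability"
    return "explore: build both capability and outcome clarity"

cap = score({"talent": 7, "culture": 6, "platform": 8})   # 7.0
out = score({"efficiency": 8, "revenue": 4, "trust": 6})  # 6.0
print(quadrant(cap, out))  # scale: strong capability, clear outcomes
```

The design choice such matrices encode is that capability without a target outcome (or vice versa) signals a different intervention than a uniform "AI maturity" score would suggest.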


13. Nation-state perspective (Narendra – final) – In response to Samrat’s final question, Narendra asserted that AI will be a competitive advantage for nations that adopt it responsibly. He emphasized the urgency of mitigating adverse effects through capacity building, clear frameworks and a five-year roadmap [318-322].


14. Consensus & disagreements – Across the discussion, the panel reached strong consensus on three core themes: (1) AI’s dual nature as both a security enabler and a new attack surface; (2) the necessity of a layered AI governance model, often termed an AI operating system or AI security operating system/playbook; and (3) the urgency of developing assessment frameworks, sandboxes and capacity-building programmes [84-90][190-192][206-210][91-93].


However, disagreements emerged regarding the primary mitigation route: Daisy advocated for a virtual, distributed security mesh embedded in the network fabric [124-132], whereas Narendra emphasised procedural safeguards such as assessment frameworks and regulatory sandboxes [241-247][260-267][268-270]. A second divergence concerned capacity-building focus: Daisy highlighted a universal enterprise-wide AI readiness gap [118-123], while Narendra called for sector-specific initiatives [242-252][254-259]. Finally, Richard’s human-centric view of resilience contrasted with Lakshmi’s infrastructure-centric emphasis [98-100][68-76].


15. Closing – Samrat thanked the panel and the audience, underscoring the need for coordinated governance that blends corporate AI responsibility with board-level risk lenses, while simultaneously investing in talent, infrastructure, and robust assessment mechanisms [91-93][318-322]. This balanced outlook reflects cautious optimism: if acted upon, AI’s transformative potential can be harnessed without compromising cybersecurity resilience.


Session transcript: Complete transcript of the session
Samrat Kishor

The context is, have you overdone it? Right? When we talk about AI and cybersecurity, these two areas, how do they come together? There’s AI for cybersecurity, and there is cybersecurity for AI. Right? So what we’re going to do is, we’re going to discuss both aspects. We’re going to at least try. So, you know, the first question, and I’d like to actually point it to Ms. Zazie, you know, what has changed, you know, if you were to look at the larger picture, the big picture, you know, in terms of AI coming into cybersecurity? What has changed?

Daisy Chittilapilly

I think as what happens with all technologies, and AI is no different in that sense. It is, of course, as we’ve been hearing over the last few days, a technology that will redefine humanity and how we live, work, play, all of that. But one thing that it has in common with all of the other technologies that have come before it is that it’s both an opportunity and a challenge. And it’s particularly true when it comes to the security space. So on one side, there is the promise that, you know, for some time now, with the advent of technologies, the number of things getting connected, all of our lives going phygital, the cyber-threat landscape has, of course, expanded, and threats have become more and more complex and complicated.

And for some time now, we’ve not been able to manage cybersecurity at human scale. So machine scale was, you know, a lot of tooling was already in that space. So there is the promise with AI that you can manage security better. So there is definitely that opportunity. But at the same time, there is the recognition, like Dario Amodei said on the main stage yesterday, that his biggest concern, and all of our concern, is that AI brings a set of risks, not all of which we know of at this point in time today. So both of these, so it’s also, like I said, that commonality is there with all technologies that came before it.

It is both an opportunity and a challenge. Because we’ve got to protect models from being jailbroken. We’ve got to make sure that the models don’t leak our confidential information and that our data doesn’t get poisoned. We’ve got to remember that most of these are open-source models that come with inherent vulnerabilities, so how do we detect them? So we’ve got to think about securing AI as well.

Samrat Kishor

Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is then used to build applications. So earlier I think the perspective that changed was that we were looking at AI just at the application layer, but it’s gone much below, into the infrastructure. It’s got embedded into the kind of systems which are now getting created and deployed.

And that is where I’d like to bring in Narendra, for your perspectives on what you are seeing in terms of national security. You know, is it something which is giving us a spike, a blip, something which you can discuss, disclose here?

G. Narendra Nath

Yeah, I mean, it’s required to be discussed. That’s one thing that’s definite. Now, one, you know, I take the points that you’ve said. One thing about all the other technological revolutions, as you said, is that there was a time frame over which they seeped into the system. Okay, and then we had time to look at how do I use it beneficially, and also to look at the adversarial effects of it and how do I mitigate those things. The case of AI is that it’s really happening at a breakneck speed. And there’s also a willingness in enterprises to adopt the different AI tools that are there. So that is where the scary part is.

And the other is the adversarial part of AI. Though you use AI for cybersecurity, the issue is that there are nation states, or big adversarial enterprises, which would be using AI as a tool, and they have got a lot of motivation to put effort and thought process into how to use it more effectively than the persons who are actually using AI for their own benefit, who are looking at how do I improve my productivity, how do I improve my efficiency; that’s the focus area that they are in. So this is where there is a disconnect, and this has to be really bridged, and that’s where the problem is.

The summit actually, in one way, is helping people become conscious about some of the measures that have to be taken. That is one part. The other is the difference between other systems and this, and this is a little technical: in the other systems we have a separate control plane and a separate data plane, so we could actually control them, provide access limits to the control plane. But here the data itself is the control, so you have poisoning of models happening through the inputs that are there. So you could have a drift, and over a period of time you will find that the model will not be behaving as you would expect it to behave, and it’s also not very deterministic. So there are challenges in how do I protect it, how do I see that the AI system gives me consistent results after a period of time. Then there is also a lack of clarity about what is a cybersecurity issue and what is an issue of malfunctioning or a poor design of an AI system; that lack of clarity also results in the challenges that are there.

Those are my preliminary thoughts. At the national scale, the issue is that when you have multiple entities, the financial sector, the telecom sector, the power sector, all adopting AI at enterprise scale, the effect of compromises on critical information infrastructure is something that should make us wake up and ask what can be done. Those are the issues.

Samrat Kishor

Excellent pointers, sir. And since you brought in the private sector, the way it has evolved, and the fact that it is subject to these evolving risks, I'd like to bring in Lakshmi sir from Tata. Sir, a lot of infrastructure is being built, connected, and communicated over what you are building for the nation. How are you seeing the paradigm shift from how it used to be before AI was commoditized into everyday technology? It used to be in the labs; now it is in everybody's hands. What is the change you are seeing, and the impact on critical infrastructure?

A. S. Lakshminarayanan

I don't think people have woken up to the fact that they are fast running towards a cliff, because I genuinely think the digital infrastructure in enterprises today is already fragile. We know, from an enterprise security point of view, how many attacks are happening. We know there are huge issues when it comes to, for example, IT and OT security; the operational technology in factories was never in the purview of IT security. Security, and digital infrastructure in general, is still very fragile. It's islands of different OEM technologies and many, many things. It is a major issue.

Now, on top of this fragility, you add AI, and the fragility is going to be multiplied a hundred times, across many, many kinds of platforms, because AI is going to increase network traffic, especially east-west traffic, again multifold. We all say, oh, I'll embed AI at the edge of the device; if I have a banking application, I'll do that. But nobody has thought it through. If you put inferencing at the edge, the number of API calls it has to make is tremendous, and these API calls are long-lived sessions; they are not traditional API calls.
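The strain from long-lived sessions can be seen with a back-of-envelope application of Little's law (L = λ × W): at the same request rate, concurrency scales with session duration. The figures below are purely illustrative assumptions:

```python
# Little's law: concurrent sessions = arrival rate x session duration.
def concurrent_sessions(arrivals_per_sec, session_seconds):
    return arrivals_per_sec * session_seconds

ARRIVALS = 200  # requests per second at one edge node (assumed figure)

traditional = concurrent_sessions(ARRIVALS, 0.2)   # ~200 ms REST call
streaming   = concurrent_sessions(ARRIVALS, 30.0)  # ~30 s inference stream

print(f"traditional API: ~{traditional:.0f} concurrent sessions")
print(f"streaming inference: ~{streaming:.0f} concurrent sessions")
print(f"load multiplier: {streaming / traditional:.0f}x")
```

Even with invented numbers, the two-orders-of-magnitude multiplier illustrates why edge infrastructure sized for traditional API traffic comes under tremendous strain.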

So the edge infrastructure is going to come under tremendous strain. That's why I'm saying that in all our excitement about AI, and I am very passionate and excited about AI, I genuinely feel people are not looking at the foundations properly. That foundation is very fragile; that is one point I want to make. The second point is that I would like to expand the scope of this discussion. It's not about AI and cybersecurity alone; it's also a broader trust question. We all know that with messages today you cannot tell real from fake. Apply that in the enterprise context. And there was a mention earlier of model drift and so on.

So what we at Tata Communications are doing, one, is to protect the digital infrastructure through the many things we can do. The unfortunate part is that I don't think enterprises have woken up to the fact that they have to do it. I tell them you can't build a skyscraper on the foundation of a bungalow, which is what they are trying to do. When it comes to the drift and the trust part of it, I believe enterprises require an AI operating system. What we mean by that is something that brings the context together, because LLMs will provide the knowledge. To turn that knowledge into actionable intelligence, you need the context layer, you need the agentic layer, and more importantly, you need a trust and governance layer which controls what an agent will or will not do.

And if I take that control into my hands and say I will configure and ensure this agent will do something or not do something, I can use the models underneath a lot more intelligently. So rather than focusing on whether this LLM or that LLM is good, this AI operating system is what is required for people to build applications in which all of this is governed properly.
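The governance layer described here can be sketched as a policy check that sits between the agent and its tools, so that every proposed action is either executed, held for human approval, or blocked. The action names and policy rules below are invented for illustration:

```python
# Explicit policy table: the governance layer, not the model,
# decides what an agent may do.
POLICY = {
    "read_report":   {"allowed": True,  "needs_approval": False},
    "send_payment":  {"allowed": True,  "needs_approval": True},
    "delete_record": {"allowed": False, "needs_approval": False},
}

def govern(action, human_approved=False):
    """Gate a single agent action against the policy table."""
    rule = POLICY.get(action)
    if rule is None or not rule["allowed"]:
        return "blocked"           # unknown or forbidden actions never run
    if rule["needs_approval"] and not human_approved:
        return "pending approval"  # high-risk actions wait for a human
    return "executed"

print(govern("read_report"))         # low-risk action runs directly
print(govern("send_payment"))        # held until a human signs off
print(govern("send_payment", True))  # approved, so it proceeds
print(govern("delete_record"))       # never permitted by policy
```

A production trust layer would also log provenance and evaluate context, but the essential design choice is the same: the configuration, not the LLM, is the final authority on what an agent does.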

Samrat Kishor

Sir, that's a great point. In fact, I was having a conversation a few days back and saying that from the era of corporate social responsibility, it's time to evolve to corporate AI responsibility, where corporates start talking about how they control and own the actions of the AI they build and deploy. Great perspective, sir. Thank you very much. At this point, I'd like to bring in Richard to continue the discussion on digital infrastructure and resilience. How has resilience, in your view, evolved when we talk about AI risks to cybersecurity and vice versa?

Richard Marko

Well, resilience is a complex question, so I will bring in a few aspects I think are very important. It is well understood that there are a lot of people in the industry, and that people are typically the weakest link in cybersecurity. The reason is that we as human beings did not evolve to deal with machines and computers, and most of us don't have really deep technical knowledge about how systems work. We depend to a big extent on a relatively superficial understanding, and so we are easier to trick with different social engineering techniques. With AI, this is becoming a big issue, because how do you distinguish a scam from real communication when the scam looks exactly like the real thing? I'm talking about deepfakes and so on. So this is one aspect of the risk connected directly with people.

The other aspect is that we want AI to empower people to do more things, and to do them more easily. So we have these agents, and we give them, or want to give them, commands like "do this for me." But we don't understand all the steps the agent will take on our behalf when performing those tasks, and each of those steps can carry a risk factor without our knowing. To perform an action, the agent may need additional tools, and where does it get them? If the AI decides on your behalf which tools you need, software packages or whatever, and they get onto your computer without supervision, that is a problem. So we have to be very careful, and this is where I'm heading: resilience here is really about paying attention to details.

What is actually happening? What is running in the background? How are your commands transferred to the agents? Is there a possibility for them to be intercepted or modified? It was difficult and complex even before the advent of the new agentic AI approach; now it is even more important to really go into all the details. We just heard from Lakshmi that he sees us moving towards a cliff. Well, that depends on us, of course. We want to go fast, we are all excited about AI, but maybe sometimes we need to slow down a little and make sure the pieces are in place and cybersecurity is not overlooked.

Samrat Kishor

Excellent, excellent perspectives. An offshoot of that question goes to Ms. Daisy: what are you seeing change in digital infrastructure, and especially in the connectivity it needs? You're at Cisco, which connects a lot of things to a lot of other things. So how are you seeing changes happen, especially around resilience, inside digital infrastructure?

Daisy Chittilapilly

I think Lakshmi touched on a very important point about the fragility of the underlying infrastructure, and that is something I want to reiterate. For the past few years we've been publishing an AI readiness index. The good news is that we are as ready as everybody else; the bad news is that maybe we're not as ready as we think we are, which is the point Lakshmi is making. Of just under 1,000 large enterprises we spoke to in India, 90 percent want to deploy agents this year, and forty percent want that agent to work alongside a human being. But only about two-thirds of those enterprises really have a data layer, a data strategy, a data platform, and a data governance strategy.

Only about one-fourth have the compute capacity they need. Only about one-third are able to understand AI threats and deal with them. And less than one-fifth have the innovation engine to think about building, scaling, and maintaining AI applications and use cases. So clearly there is an ambition-versus-reality gap we have to solve for. That's not a problem as long as we all know that's where we are and remain acutely aware of the issue.

The other thing is what AI is essentially leading to: we are rewiring and restacking the enterprise. It's not just networks; it's compute, it's silicon. I know that at the national level silicon security is a conversation. All this resiliency, which we used to build almost as a bolt-on at the top, and which we used to think of only as cyber resiliency, is now a system resilience built into all layers of the infrastructure stack and all layers of the AI stack. That's why at Cisco, since you asked me a network-specific question: we used to deal with connectivity largely as connectivity. Now we know the persona of that end port, the one connecting to an end device that might be doing inferencing, or sitting in the data center, has to change. On one side it will be a switch or a router, but on the other side it will also be a security defense point.

So the idea of building a special grade of security appliances and putting them in various parts of the network is fast becoming outdated. What we have to do is break security into a number of virtual instances that can go wherever you want the security policy to be. It becomes a very virtual, distributed mesh rather than hardware. Yes, there will still be hardware; I'm not saying it will go away. But this ability to infuse security into the fabric, and networks tend to be the all-pervasive fabric, is the way, at least at Cisco, we are thinking about it. So the domains of networking and security are crashing together, and secure networking is the conversation in the network space in particular.

The other part is the performance requirement, which Lakshmi also alluded to. AI will put pressure on the underlying infrastructure. It is an exponential technology, and the demands it creates on its underlying layers are also exponential. So we've almost got to build a new category of technology: silicon, systems, applications, everything. A new category has to be built, and we have to build it in new ways; you cannot build it the way we built things in the past. Applications are an interesting one. We used to give an input and expect the same output on the other side, but if you are going to deploy AI models, the technology is probabilistic.

And I referred to this earlier. You want to get it to a degree of consistency, because in a financial application or a very important citizen service, you give an input and the output has to be deterministic, yet at the core you are using a probabilistic technology. That refinement takes a whole lot of work. So it is rethinking at all layers, from silicon to software to systems; you have to rethink everything, every rule.
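One common way to wrap a probabilistic model in deterministic guarantees is to validate each output against an expected schema and retry until it conforms. A minimal sketch, in which `flaky_model` is an invented stand-in for a real LLM call:

```python
import json
import random

def flaky_model(prompt, seed):
    """Stand-in for a probabilistic model: sometimes returns free text
    instead of the structured JSON the application needs."""
    random.seed(seed)
    if random.random() < 0.5:
        return '{"amount": 125.00, "currency": "INR"}'
    return "Sure! The amount is 125 rupees."  # not machine-parseable

def call_with_validation(prompt, retries=5):
    """Retry until the output parses against the expected schema,
    so the surrounding application sees deterministic behaviour."""
    for attempt in range(retries):
        raw = flaky_model(prompt, seed=attempt)
        try:
            data = json.loads(raw)
            if {"amount", "currency"} <= data.keys():
                return data
        except json.JSONDecodeError:
            pass  # malformed output; try again
    raise RuntimeError("model never produced schema-valid output")

result = call_with_validation("extract the payment amount")
print(result)
```

Real systems add temperature control, grammar-constrained decoding, or a second verification model, but the pattern is the same: a deterministic validation layer around a probabilistic core.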

Samrat Kishor

Excellent, excellent. And since you brought in that perspective of rethinking and reimagining how we use AI in the operating system of the company, I'd like to bring in Darshan here. Darshan, you do a lot of great work creating thought leadership content as well as consulting for very large companies. Of course, there are CXOs and a very highly ranked government official sitting here, but what are other CXOs thinking about when it comes to AI? Is it still a compliance thing, or has it percolated into strategy?

Dharshan Shanthamurthy

First of all, thank you. I'll add some context to what I've heard so far. My view is that any technology disruption brings two emotions: hope as well as fear. The other panelists have rightfully covered the fear construct of AI in cyber safety, and rightly so; no disputing that truth. But there is a huge hope component for a cybersecurity company like ours, because we are a hardcore deep-tech cybersecurity company, and I see a lot of opportunities. We as a country, India, are at the sweet spot at the intersection of AI and cybersecurity. This topic is very aptly crafted, because I think it's a huge opportunity for us to utilize.

And I'll tell you why. Cybersecurity has so far been a very asymmetric equation. The intruders have always had an advantage over the defenders, because they just need to get one thing right while we need to get everything right. But with AI, all of a sudden we are on a level playing field, technologically, to identify a needle in the haystack. One classic use case is an agentic security operations center. If you have ever visited a security operations center, it is an analyst looking at a screen 24x7, an almost inhuman job, so to speak.

But today, with AI, you have a level playing field. We've seen those kinds of use cases deployed at our SOC, where even a shift handover is done by an agent. So there are a lot of real use cases, and I'm on the hope side. Second, in terms of talent: we have a lot of youngsters in this room who are looking to grow. We have spoken so much about job opportunities evaporating in other services and other areas. I think we can create the world's cybersecurity talent in combination with AI, because cybersecurity and AI are not two different fields: cybersecurity needs AI, and AI needs cybersecurity.

So I think we are at a very opportune time to ride this wave and create world-class talent. Now, on the second part you asked about: that's what we are hearing from CXOs globally, since we deal with a lot of people in the payment ecosystem. CXOs obviously have the same construct of hope versus fear. For a CISO or a CIO, there is an amount of fear coming in because these are real problems; for example, deepfakes and spear-phishing attacks have become more robust with AI. But one of the key things we try to explain is: yes, those are things you need to address, no doubt, but can you also look at how you can take advantage of AI?

And Lakshmi rightly pointed out the idea of an AI operating system. Similarly, we talk about an AI security operating system: you should have a playbook on how to leverage AI rather than always being on the defensive. Those are my views, Samrat.

Samrat Kishor

Excellent views, and thank you very much for those perspectives. I'm glad I still see people coming in; this is an interesting session, with some people standing as well. I would like to bring in Pradeep now. Pradeep, as a follow-on to what I just asked Darshan: while AI is percolating into strategy a bit, do you think we should have a dedicated function within an organization? And what are you seeing currently, not just in India but elsewhere as well?

Pradeep Sekar

Yeah, thank you for that. Adding on to what Darshan said, I don't mind the hope-and-fear framing, because in the cybersecurity space both of them add to what we can do for the industry and for the country as a whole. When we talk strategically with leaders and boards at companies in India and across the world, the AI conversation predominantly goes towards innovation, competitiveness, and the ability to bring in productivity gains. What often gets missed is that AI is quietly reshaping the risk equation within the enterprise. Cybersecurity can no longer be just about protecting systems and data.

Now, don't get me wrong: cybersecurity is still needed to identify all the systems within your enterprise, and beyond it the extended enterprise, and to protect the data on all of those systems. But given the AI landscape, it needs to evolve into something more, which is, I love how Lakshmi put it, trust. Going forward, how can cybersecurity evolve to start protecting decision-making and trust? Because trust is starting to become measurable, through provenance, through authenticity, and through verification. All of these mechanisms are going to come in so that we can identify, measure, risk-rank, and call out whether a particular transaction, whether it's a payment approval or an executive communication, is trustworthy or not.
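Provenance verification of a transaction can be sketched with a message authentication code: the sender attaches a tag computed with a key only legitimate parties hold, and the receiving system verifies it before acting. The key and messages below are invented for illustration; a real deployment would use keys from a proper key-management system:

```python
import hashlib
import hmac

SECRET = b"shared-enterprise-key"  # illustrative only; never hard-code keys

def sign(message: bytes) -> str:
    """Attach provenance: a tag only a key-holder could have produced."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check provenance before acting; constant-time comparison."""
    return hmac.compare_digest(sign(message), tag)

payment = b"approve transfer of 50,000 to vendor 1142"
tag = sign(payment)

print(verify(payment, tag))  # the authentic instruction checks out
# A forged or tampered instruction fails verification:
print(verify(b"approve transfer of 500,000 to vendor 9999", tag))
```

This is one small piece of the "measurable trust" idea: instead of assuming an executive communication or payment approval is genuine, the system computes a verifiable answer.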

And then accordingly, the agent or the system allowing the transaction decides whether it goes through or not. That's something we're seeing, and AI in this context is a force multiplier on both sides. As defenders, like Darshan said, we are able to detect and identify threats at a scale and speed we have never seen before. Is it going to completely revamp how we run SOCs? A little, yes. It's not going to replace all the analysts, but for certain tasks we have already started seeing, with Microsoft's Security Copilot for example, how it can automate work.

Different agents doing different tasks: we're already starting to see that. But in addition, AI is also helping attackers on the other side of the equation; it is industrializing disruption at scale. Think phishing, think social engineering. All of this manipulation is now happening at an unprecedented scale, and you're going to see it continue for the next few years, because that's where we're headed in terms of AI-aided phishing and manipulation, and how it will impact the industry as a whole. That's pretty much how the tectonic shift is happening across the board.

Working with leaders and board members, we look at how to frame these risks, and here we usually see three lenses. One is compliance risk: am I complying with the EU AI Act, with the DPDP Act, or with other sectoral guidance? That's more of a check-the-box approach; it may help protect against regulatory exposure, but not against systemic risk, like what Ms. Daisy was describing. The second angle, which some companies have started to move towards, is operational risk, where boards are starting to ask: which models am I using? Are they reliable?

Are they safe? Are they trustworthy? And what is the risk if a particular model, or the service provider behind it, goes down? That's the operational-risk angle we're seeing more of. The third angle, which I think very few companies take today, is strategic risk: being able to say, if there is an AI-driven identity attack that damages my organization's reputation with my customers, what is my exposure in financial terms? These are questions boards will need to start asking; we are already getting them from leaders, on how to measure and quantify risk in financial terms.

And we need to be able to convey that to the board as well, because at the end of the day boards are accountable to their stakeholders and shareholders.
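Quantifying risk in financial terms, as the speaker suggests, is often done with the classic annualized-loss-expectancy model. The figures below are invented purely for illustration:

```python
# ALE = single loss expectancy (cost per incident)
#       x annualized rate of occurrence (incidents per year).
def annualized_loss(single_loss, annual_rate):
    return single_loss * annual_rate

# e.g. a deepfake-driven identity attack assumed to cost $2M per
# incident and to occur roughly once every four years:
ale = annualized_loss(2_000_000, 0.25)
print(f"annualized exposure: ${ale:,.0f}")
```

Expressing exposure as a single annual dollar figure is what lets a board weigh an AI risk against the cost of mitigating it, which is the point of the strategic-risk lens.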

Samrat Kishor

That's great, and those are some interesting lenses you've put on the whole conversation. Sir, I'd like to bring you in now from your vantage point. When we talk about India's DPI, we are implementing AI into systems which cater to healthcare, to telecom, across the entire citizen supply chain, if you will. So how do we make sure the AI deployments we are doing are secure, and what capabilities do we have to address the risks the fellow panelists highlighted?

G. Narendra Nath

The financial sector, for example, is mature. But take the health sector: it's not as mature as others. Yet if you look at the health sector's enthusiasm to adopt AI, you'll find the level of enthusiasm is similar to that in other sectors. That is a big challenge. We've been engaging with the health sector, and have had recent meetings, on how to improve the cybersecurity posture of that sector. So that's a big challenge, actually. We had a digital divide; we have a cybersecurity divide; and now we are going to have an AI divide across enterprises in different sectors.

That is a challenge that needs to be addressed; that, I think, is the capacity-building part. The other part is coming up with frameworks that people can access and that help them understand what is really required to be done. You talked of assessment: when an enterprise comes with an AI system, is it secure? Is it doing the work it's supposed to do? We don't have those assessment frameworks now. The testing and assessment part is important, and so is creating the infrastructure so that people can go and test and assess. The department of DRD has come up with an ETI framework, if you're aware of it.

Similarly, from our office we funded a project, around a year back, starting in November 2024, to come up with an assessment framework for AI systems. One part is the security aspect, and the other, of course, is the functional aspect: if somebody claims an AI system does something, how do you actually assess that? So one part is capacity building, and the other is having the frameworks in place. One good thing about this country is that we have an institutional framework established over time because of cybersecurity, like CERT-In and the other institutional bodies.

The sectoral regulators have also come up with sandboxing regulations, in the sense that if you want to try out something new, there are regulations that help you do so. In the financial sector you have the RBI sandbox, and the telecom sector has a similar mechanism. People should start using these sandboxes to prove technologies, applications, and use cases; that will help them understand how things really work before deploying in production. That, I think, would help going forward.

Samrat Kishor

Awesome. Thank you, sir. It's enlightening and enriching for all of us to hear your perspective, especially on what the government is doing. I'd like to bring in Lakshmi sir from Tata for the next question. Sir, if we reconvene here five years from now, what are we going to be talking about? What did we do? What did we get right?

A. S. Lakshminarayanan

I think these discussions are very healthy, whether we look at AI with a positive lens or a fear lens. I'll make two comments. One is on the question of assessment. At Tata Communications, when we asked ourselves where we want to be five years from now, I made the statement that the next five years will determine the health of the company for the next fifty years, because the technologies are moving very fast. For an assessment framework, we studied a lot of material and didn't find anything good, so we developed a framework ourselves, in which on one axis we plot capability. It includes talent.

It includes the platform. I said, look, there's no point doing individual use cases in an organization; how many use cases will you do? You need a platform approach, which is where we said an AI operating system is required, and that is maturing. So on one axis we plot capability: talent, even culture. I don't know whether people have appreciated that AI is a very different paradigm. Even now I see people talking about how AI can help automate things and do things faster. No, that's not what AI will do. While the previous technologies of cloud and internet helped companies scale transactions, AI is going to scale decisions, and when you're scaling decisions you need to think in a different paradigm altogether. We are still talking in the old paradigm of which tasks can be automated and how. So on the capability axis, the culture dimensions have to be thought through carefully, and talent too; I find some of the younger talent easier to train on AI than some of the older, unfortunately. So the whole talent-and-capability equation is one axis on which we plot ourselves, and the other axis is outcomes: what outcomes do you really want to deliver with AI?

Outcomes could be about efficiency, about revenue enhancement, or about trust and customer satisfaction. All those outcomes need to be plotted. I must admit we ourselves are somewhere in the lower quadrant, and I hope we as a company will move to the top quadrant. That needs to be defined and visualized; only then can you move towards it, and that is what we are driving the company towards, through all the platform development we are doing and the strengthening of our infrastructure for enterprises. We've shared some of these assessments with our customers as well. So that is one. I hope most people would see themselves moving towards the top quadrant in five years' time.

The second thing I worry about, in the context of strategy, is this. In the previous wave of technology, with the internet and cloud, new business models came about. We had intermediaries, the Booking.coms and others, who disintermediated many, many players, or fintechs who came and did things better than the larger banks, and only later did the larger entities wake up to the fact that these people were going to eat their lunch. That is what happened in the previous wave of technology.

In AI, I think similar disruption is waiting to happen. We don't know where, when, or what. But if a strategy does not think about what disruptions are coming, we will have missed the bus. So five years from now, I would expect a new class of AI-native companies out there disrupting existing business models. Those are the two things I would expect in five years.

Samrat Kishor

Fabulous. And sir, one last question to you. If you were to give me a call five years from now and say, "Samrat, this is how nation states have changed," what would that be?

G. Narendra Nath

See, one is that, as I have said elsewhere, adoption of AI is a competitive advantage, so you have to adopt AI; you don't have a choice, because other nations and other enterprises are going to adopt it and look at how to do business better. So going down the line, you will find that we will have adopted AI, and this conference is very good for that, five years down the line. The other is protecting yourself from the adverse effects of AI, because it is a very powerful tool. In just one year we have seen so much development happen that we do not know where this is really going to lead us. So the thing is for us to be on our toes: to look at how this technology is going to affect the way we do business and run our countries; to develop capacity and capability; and to identify the dependencies we take on when this technology is adopted and how to mitigate the dangers of those dependencies. That, I think, is where the thought process, and the roadmap for the next five years, lies.

Samrat Kishor

Thank you, thank you very much, sir, and thank you to all the panelists for taking the time and agreeing to do this. I see the room is full, with a lot of people waiting on the sides as well. Thank you all for your attention; please put your hands together for the esteemed panel we have here. We have to conclude this panel only for paucity of time; otherwise we could have gone on. Thank you very much.

Related ResourcesKnowledge base sources related to the discussion topics (41)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“AI can be deployed for cybersecurity while cybersecurity is required for AI.”

The knowledge base highlights the dual nature of AI in security, noting it can both enhance defenses and introduce new risks, confirming the two-way relationship described [S34].

Confirmedhigh

“AI is simultaneously an opportunity and a challenge; the expanding cyber‑threat landscape has out‑grown human‑scale defence, prompting a shift to machine‑scale tools.”

Reports describe escalating threats in scale, sophistication and frequency, and stress the need for AI-driven tools to keep pace, supporting the opportunity-challenge framing [S119] and [S120] and the human-capacity gap [S121].

Confirmedhigh

“AI introduces novel risks such as model jail‑breaking, confidential data leakage, and vulnerabilities in open‑source models.”

Open-source model risks and broader AI security concerns are documented, including potential data exposure and model manipulation [S122] and [S123]; agentic AI behaviours that can act independently are also noted [S114].

Additional Context (medium confidence)

“AI has moved from being a mere application‑layer add‑on to becoming a fundamental component of the technology stack.”

AI is described as a technology that will redefine how societies work and is being embedded as core infrastructure, indicating its shift from peripheral to foundational status [S1] and its rapid advancement [S128].

Confirmed (high confidence)

“AI adoption is occurring at “breakneck speed”, outpacing the development of safeguards, and nation‑states as well as large adversarial enterprises are already weaponising AI.”

UN remarks emphasize AI’s rapid pace and the urgency of governance, while other sources note that legislation is being drafted at breakneck speed, reflecting concerns about weaponisation and insufficient safeguards [S67] and [S129].

Additional Context (medium confidence)

“AI models use the data itself as the control plane, making them vulnerable to poisoning, drift and non‑deterministic behaviour.”

The knowledge base discusses AI model vulnerabilities such as data poisoning and model drift, underscoring the control-plane nature of data in AI systems [S34].
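The poisoning and drift vulnerability noted above can be illustrated with a minimal monitoring sketch: compare the distribution of a live feature stream against its training-time baseline and flag large shifts. All names (`drift_score`, `baseline`, `live`) and the three-sigma threshold are illustrative assumptions, not anything described by the panel or the knowledge base.

```python
# Illustrative data-drift monitor for a deployed model's input stream.
# The variable names and the alert threshold are assumptions for the sketch.
from statistics import mean, stdev

def drift_score(baseline, live):
    """Shift of the live feature mean, in baseline standard deviations."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return 0.0
    return abs(mean(live) - mu) / sigma

baseline = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50]   # feature values at training time
live     = [0.71, 0.69, 0.73, 0.70, 0.72, 0.68]   # values seen in production

score = drift_score(baseline, live)
ALERT_THRESHOLD = 3.0  # flag shifts beyond three baseline standard deviations
if score > ALERT_THRESHOLD:
    print(f"possible drift or poisoning: score={score:.1f}")
```

A real deployment would track many features and use proper distributional tests, but the sketch shows why treating data as the control plane makes continuous monitoring a security requirement rather than an optimization.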

Confirmedhigh

“The rapid spread of AI across finance, telecom, power and other critical sectors raises systemic risk.”

Threats to critical infrastructure and the systemic nature of AI-driven risks are highlighted in discussions on cyber-national security intersections [S120] and [S126].

External Sources (129)
S1
Scaling AI for Billions_ Building Digital Public Infrastructure — -Dharshan Shanthamurthy- Works with a cybersecurity company, provides consulting and thought leadership for large enterp…
S2
Scaling AI for Billions_ Building Digital Public Infrastructure — -G. Narendra Nath- Government official working on national security and cybersecurity policy, involved with CERT India a…
S3
Scaling AI for Billions_ Building Digital Public Infrastructure — – G. Narendra Nath- Pradeep Sekar – Richard Marko- Pradeep Sekar
S4
Scaling AI for Billions_ Building Digital Public Infrastructure — – Daisy Chittilapilly- A. S. Lakshminarayanan- G. Narendra Nath – Daisy Chittilapilly- Dharshan Shanthamurthy- Pradeep …
S5
Scaling AI for Billions_ Building Digital Public Infrastructure — – Richard Marko- Dharshan Shanthamurthy
S6
Event page with the recording — – **Marko Markovic**: Role/title not mentioned. Appears to be a travel guide content creator or host, providing detailed…
S7
Scaling AI for Billions_ Building Digital Public Infrastructure — -Samrat Kishor- Moderator/Host of the discussion
S8
Announcement of New Delhi Frontier AI Commitments — -Bharat: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified …
S9
Scaling AI for Billions_ Building Digital Public Infrastructure — – A. S. Lakshminarayanan- G. Narendra Nath- Pradeep Sekar – Daisy Chittilapilly- A. S. Lakshminarayanan- G. Narendra Na…
S10
Thousands of companies vulnerable to cyberattacks due to exploited flaw in open-source AI framework, researchers find — Security analysts have warned about active exploitation of a contentious vulnerability within the widely-used open-source AI …
S11
Hackers exploit AI: The hidden dangers of open-source models — As AI adoption grows, security experts warn that malicious actors are finding new ways to exploit vulnerabilities in open…
S12
Generative AI presents the biggest data-risk challenge in history — Cybersecurity specialists warn that generative AI systems, such as large language models, are creating a data risk frontie…
S13
https://dig.watch/event/india-ai-impact-summit-2026/scaling-ai-for-billions_-building-digital-public-infrastructure — And I refer to it. So you want to get it to a degree of assistance so that you cannot expect in a financial application …
S14
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S15
How AI Is Transforming Indias Workforce for Global Competitivene — And I think there is a pretty big gap. Actually, I think that gap is good for workforce. Because no matter what the capa…
S16
Employees embrace AI but face major training and trust gaps — SnapLogic has published new research highlighting how AI adoption reshapes daily work across industries while exposing tru…
S17
How AI Is Transforming Diplomacy and Conflict Management — “people developing these models don’t even have full legibility over how they’re working”[72]. “adversarial negotiations…
S18
WS #279 AI: Guardian for Critical Infrastructure in Developing World — Daniel Lohrman: Yeah, but I cannot, the video is not started, so I don’t know if you can see me, but I can certainly s…
S19
Internet Governance Forum 2024 — However, the session also discussed the inherent risks associated with AI systems. Daniel Lohrmann emphasized concerns s…
S20
Building the Next Wave of AI_ Responsible Frameworks & Standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S21
Advancing Scientific AI with Safety Ethics and Responsibility — Both speakers agree that evaluation should occur before deployment rather than after, with Speaker 1 emphasizing socio-t…
S22
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa recommends following the UK and Singapore model of creating regulatory sandboxes where innovators can test AI syst…
S23
Signature Panel: Building Cyber Resilience for Sustainable Development by Bridging the Global Capacity Gap — In adherence to its national strategy, Ireland actively participates in initiatives like EU Cybernet, aimed at bolsterin…
S24
Agenda item 6 — Brunei Darussalam:Thank you, Mr. Chair. I extend my delegation’s gratitude to you for your ongoing guidance in this work…
S25
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/5/OEWG 2025 — This comment helped refocus the discussion on the importance of capacity building as a foundational element for inclusiv…
S26
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S27
Generative AI: Steam Engine of the Fourth Industrial Revolution? — The adoption of newer technologies is not limited to a specific industry and is prevalent across all sectors. Currently,…
S28
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Moret argues that private sector companies have a responsibility to actively prevent AI systems from being used to viola…
S29
Bridging the AI innovation gap — This was identified as a critical need but requires further research into specific skill gaps and capacity building requ…
S30
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S31
Laying the foundations for AI governance — Dawn Song: Yeah, that’s a great question. I think in AI safety and security, we are facing huge challenges. The field is…
S32
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucia…
S33
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Dr. Yazeed Alabdulkarim:Yeah, regulations are basically a controversial topic because many believe that it’s challenging…
S34
Challenging the status quo of AI security — – Sounil Yu- Babak Hodjat AI technology has two sides: it can enhance security measures and help improve existing secur…
S35
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — Cybersecurity | Infrastructure Rosenworcel argues that the rapid expansion of connected devices creates software vulner…
S36
The opportunity costs of an arms race — Conflict can easily erupt due tomisinterpreted intent.This is one aspect, among many, at which multilateral forums on se…
S37
Modern Diplomacy — This concern has been discussed earlier in relation to the Gulf War. IT greatly enhances the role of brainpower in rel…
S38
Keynote-Roy Jakobs — “Innovation and governance must advance together With speed Because trust determines adoption … If they move at differ…
S39
(Day 5) General Debate – General Assembly, 79th session: morning session — Murat Nurtleu – Kazakhstan: Mr. President, Mr. Secretary General, Excellencies, ladies and gentlemen, let me first cong…
S40
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S41
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade: Okay, so I’ll be very brief, the truth is, I hope your boss is not watching. Ah, my boss is always watch…
S42
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S43
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Context-based analysis and stakeholder engagement are crucial for effective risk assessment
S44
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S45
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Authorities and independent media will lag behind while malicious actors remain behind. one step ahead. Accountability w…
S46
IBM CEO’s take on AI’s influence on the business landscape — IBM’s CEO, Arvind Krishna, has left no room for doubt – AI is set to revolutionize the business world. Earlier this year…
S47
Shaping the Future AI Strategies for Jobs and Economic Development — These key comments transformed what could have been a superficial discussion about AI benefits into a sophisticated anal…
S48
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Malawi: Thank you, Chair. Thank you, Chair, for giving us the floor. Malawi acknowledges the critical role of capacit…
S49
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S50
Building the AI-Ready Future From Infrastructure to Skills — And Manhattan Project, about 65 % of the entire funding of Manhattan Project was at Oak Ridge National Laboratory. And i…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S52
S53
Building Trustworthy AI Foundations and Practical Pathways — “Frontier risks are risks which are very, very difficult to observe, right?”[59]. “There are social risks which are easi…
S54
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — – Practical, actionable recommendations based on risk assessment Chris Martin: And guys, I know this seems daunting. …
S55
Keynote Adresses at India AI Impact Summit 2026 — The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India partner…
S56
Advancing Scientific AI with Safety Ethics and Responsibility — -Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI…
S57
Scaling AI for Billions_ Building Digital Public Infrastructure — The conversation highlighted the critical importance of building proper foundations before implementing AI capabilities,…
S58
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This distinction has profound implications for risk mitigation strategies. Safety requires internal controls and model v…
S59
NSA’s AISC releases guidance on securing AI systems — The National Security Agency’s Artificial Intelligence Security Center (NSA AISC) has introduced new guidelines to bolster…
S60
AI Development Beyond Scaling: Panel Discussion Report — Bengio highlights that current AI systems can develop sub-goals that weren’t chosen by humans and can go against instruc…
S61
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S62
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S63
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S64
From principles to practice: Governing advanced AI in action — Chris Meserole: concerns possible global alignment? Well, first of all, it’s great to be here and just, you know, a wond…
S65
Challenging the status quo of AI security — – Sounil Yu- Babak Hodjat AI technology has two sides: it can enhance security measures and help improve existing secur…
S66
Can National Security Keep Up with AI? / Davos 2025 — AI technology has both beneficial and potentially harmful applications. This dual-use nature creates dilemmas and challe…
S67
9821st meeting — Ecuador:Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S68
Panel discussion: International law, cyber-norms, CBMs, capacity building,institutional dialogue — Dr Katherine Getao, one of the esteemed panellists, highlighted the dual nature of digitalisation, presenting both signi…
S69
Agenda item 5: Day 1 Afternoon session — Australia:Thank you, Chair. The relevance and value of our open-ended working group relies upon us candidly exploring an…
S70
Agentic AI drives a new identity security crisis — New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can…
S71
Cybersecurity in the Age of Artificial Intelligence: A World Economic Forum Panel Discussion — – Nadav Zafrir- Jill Popelka- Marc Murtra Cybersecurity | Infrastructure Rosenworcel argues that the rapid expansion o…
S72
Challenging the status quo of AI security — – Sounil Yu- Babak Hodjat AI technology has two sides: it can enhance security measures and help improve existing secur…
S73
Smart machines, dark intentions: UN urges global action on AI threats — The United Nations has warned that terrorists could seize control of AI-powered vehicles to launch devastating attacks in …
S74
The opportunity costs of an arms race — Conflict can easily erupt due tomisinterpreted intent.This is one aspect, among many, at which multilateral forums on se…
S75
(Day 3) General Debate – General Assembly, 79th session: morning session — William Samoei Ruto – Kenya: Your Excellency, President of the 79th Session of the United Nations General Assembly, Amb…
S76
(Day 5) General Debate – General Assembly, 79th session: morning session — Murat Nurtleu – Kazakhstan: Mr. President, Mr. Secretary General, Excellencies, ladies and gentlemen, let me first cong…
S77
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the l ates…
S78
Global AI adoption rises quickly but benefits remain unequal — Microsoft’s AI Economy Institute has released its 2025 AI Diffusion Report, detailing global AI adoption, innovation hubs…
S79
Scaling AI for Billions_ Building Digital Public Infrastructure — And the other is the adversarial part of the AI is that. though you use AI for cyber security but the issue is that ther…
S80
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Audience:Thank you for giving me the floor. My name is Ada Majalo. I’m coming from the Africa IGF as a MAG member. Very …
S81
Informal Stakeholder Consultation Session — So naturally, it amplifies the current structure that streams users’ controls over their data. It further strengthens a …
S82
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S83
Open Forum #33 Building an International AI Cooperation Ecosystem — Sajid Rahman: Thank you, and good afternoon. You know, it’s a great pleasure to speak about something which is not only …
S84
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S85
Closing remarks – Charting the path forward — Al Mesmar highlights that as AI systems become more powerful, governing access to computational infrastructure and large…
S86
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion revealed significant consensus across diverse stakeholders on fundamental questions about AI standards. A…
S87
IBM CEO’s take on AI’s influence on the business landscape — IBM’s CEO, Arvind Krishna, has left no room for doubt – AI is set to revolutionize the business world. Earlier this year…
S88
AI will not replace people – but people who use AI will replace people who do not | IBM’s Report — According to IBM’s report, executives estimate that around 40% of their workforce will need to reskill due to implementin…
S89
GermanAsian AI Partnerships Driving Talent Innovation the Future — AI and digital technologies are reshaping how businesses operate faster than ever before. For companies the challenge is…
S90
How AI in 2026 will transform management roles and organisational design — In 2026, AI will transform management structures and automate tasks as companies strive to demonstrate real value. By 20…
S91
AI for equality: Bridging the innovation gap — Cherie Blair: Look at this place, it’s buzzing. It’s amazing. Cherie Blair: Well, I have to say I’m a bit of a techie e…
S92
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Anshul Sonak: ≫ Thanks, Yiping. Good morning. So, calling from Silicon Valley, this is a very interesting conversation. …
S93
Opening address of the co-chairs of the AI Governance Dialogue — Majed Sultan Al Mesmar: Bismillah ar-Rahman ar-Rahim. Excellencies, distinguished guests, colleagues, friends, As-salamu…
S94
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S95
OPENING SESSION | IGF 2023 — It involves both possibilities and risks and is a transformative technology. The Hiroshima AI process will aim to reflec…
S96
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S97
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S98
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S99
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S100
Agenda item 5: Day 2 Afternoon session — China has adopted a proactive stance towards developing and harmonising new norms within the cybersecurity sphere, showc…
S101
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — 25 ,000 people. And I think it’s possible. I think it’s possible to use the technology at the expense that it has reache…
S102
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski: Okay, that’s interesting. I hear a little bit of a delay. Good idea. All right. Good afternoon, early…
S103
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: That will really empower people globally. What do we expect from the Global Digital Compact to make this…
S104
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — These key comments fundamentally shaped the discussion by challenging conventional narratives about AI development and g…
S105
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S106
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S107
High Level Dialogue with the Secretary-General — The tone was largely serious and earnest, with participants speaking candidly about shortcomings in current youth engage…
S108
Global Risks 2025 / Davos 2025 — Kashim Shettima: Well, the word for crisis in the Chinese language is wei desu, wei stands for danger and desu for op…
S109
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S110
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Dennis Kenji Kipker:Yeah, thank you very much, Jeannie, and thank you for the possibility to speak here today. As a prof…
S111
Open Forum #3 Cyberdefense and AI in Developing Economies — # Expert Panel Discussion: Cyber Defence and Artificial Intelligence Challenges for Developing Economies Jose Cepeda: a…
S112
WS #31 Cybersecurity in AI: balancing innovation and risks — Dr. Alison: Okay. Thank you. So I speak from a personal perspective here. So I don’t know if, realistically, I don’t…
S113
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Helmut Reisinger:Yeah. Good afternoon, everybody. As-salamu alaykum. I am representing Palo Alto Networks. We are a cybe…
S114
How agentic AI is transforming cybersecurity — Cybersecurity is gaining a new teammate, one that never sleeps and acts independently. Agentic AI doesn’t wait for instruct…
S115
Cutting through Cyber Complexity / DAVOS 2025 — Hoda Al Khzaimi highlights how AI and emerging technologies are rapidly changing the cybersecurity landscape. She argues…
S116
Beyond answers: How AI is redefining web communication for International Geneva — It may seem paradoxical to look backward when facing advanced technology. However, in an age where AI generates content …
S117
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Julie Sweet points out that despite the focus on AI scaling, the vast majority of data infrastructure work that companie…
S118
Agenda item 5: Day 2 Morning session — The country reflects on the UK’s own dealings with acknowledged Russian cyber interferences in its political system, dee…
S119
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Cyber threats are escalating in scale, sophistication, and frequency
S120
Opening of the session — The cyber threat landscape is rapidly evolving, with increasing sophistication and complexity of attacks targeting criti…
S121
Call for action: Building a hub for effective cybersecurity | IGF 2023 — There is a deep-rooted concern about the ever-expanding gap in the cybersecurity field. Despite technological advances, …
S123
Don’t waste the crisis: How AI can help reinvent International Geneva — Risks extend beyond breaches of confidential data. Everyday interactions—like querying AI platforms—can inadvertently ex…
S124
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S125
INTERNATIONAL CIIP HANDBOOK 2008 / 2009 — The establishment of these organizational units and their location within the government structures are influenced by va…
S126
WS #84 The Venn Intersection of Cyber and National Security — It led to a detailed discussion of India’s cybersecurity initiatives and frameworks, offering insights into how one nati…
S127
LEBANON NATIONAL CYBER SECURITY STRATEGY — –  Use proven and well-known security information feeds to compile the necessary information to build a reputation data…
S128
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S129
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — Many laws are being developed at a breakneck speed
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Daisy Chittilapilly
3 arguments, 165 words per minute, 1068 words, 386 seconds
Argument 1
AI offers machine‑scale security management but introduces model leakage, jail‑breaking, and open‑source vulnerabilities
EXPLANATION
Daisy explains that AI can handle security tasks at machine scale, addressing the growing complexity of cyber threats, but it also brings new risks such as models being jail‑broken, leaking confidential data, and the presence of vulnerabilities in open‑source AI models.
EVIDENCE
She notes that AI promises to manage security at machine scale, referencing existing tooling that already operates in this space (lines 14‑16). She then highlights specific risks: the need to protect models from jail‑breaking, prevent confidential information leakage, and detect vulnerabilities in open‑source models (lines 21‑24).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source AI tools have been shown to contain exploitable flaws, such as the vulnerability in the Ray framework, and researchers report malicious code being embedded in open-source models, confirming the risk of model leakage and jail-breaking [S10][S11][S12].
MAJOR DISCUSSION POINT
Dual nature of AI in cybersecurity
AGREED WITH
G. Narendra Nath, Dharshan Shanthamurthy, Pradeep Sekar
Argument 2
AI pressures every layer of the stack, requiring a shift from hardware‑centric security appliances to a virtual, distributed security mesh
EXPLANATION
Daisy argues that the traditional approach of placing dedicated security appliances at fixed network points is becoming outdated. Instead, security must be virtualised and distributed across the fabric, allowing policies to be applied wherever needed.
EVIDENCE
She describes the move from hardware‑centric appliances to breaking security policies into many virtual instances that can be placed anywhere in the network, creating a virtual distributed mesh rather than relying on fixed hardware (lines 124‑132).
MAJOR DISCUSSION POINT
AI integration into critical infrastructure and system fragility
DISAGREED WITH
G. Narendra Nath
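The "virtual distributed mesh" Daisy describes in Argument 2 can be sketched as one policy definition instantiated at many enforcement points rather than bound to a fixed appliance. The class name, the enforcement-point labels, and the blocked ports below are illustrative assumptions, not details from the panel.

```python
# Sketch: a single security policy evaluated wherever it is placed in the
# fabric, instead of at one fixed hardware chokepoint. Names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    blocked_ports: frozenset

    def allows(self, port: int) -> bool:
        """Return True if traffic on this port passes the policy."""
        return port not in self.blocked_ports

# One policy definition, many virtual instances (edge, sidecar, gateway, ...)
policy = Policy("deny-legacy-protocols", frozenset({23, 445}))
enforcement_points = ["edge-router", "k8s-sidecar", "api-gateway"]

for point in enforcement_points:
    for port in (443, 23):
        verdict = "allow" if policy.allows(port) else "deny"
        print(f"{point}: port {port} -> {verdict}")
```

The design point is that the policy object, not the appliance, is the unit of deployment, so the same rule set can be applied consistently at any point where AI-era traffic appears.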
Argument 3
There is a significant AI readiness gap in large enterprises, with many lacking data strategy, compute capacity, threat understanding, and innovation capability.
EXPLANATION
Daisy points out that while organisations are eager to deploy AI agents, most do not have the foundational data platforms, sufficient compute resources, or the ability to understand and mitigate AI‑related threats, creating a mismatch between ambition and reality.
EVIDENCE
She cites Cisco’s AI readiness index showing that only about two‑thirds of enterprises have a data layer and strategy, one‑fourth have adequate compute capacity, one‑third can understand AI threats, and less than one‑fifth possess an innovation engine to build and scale AI applications (lines 118-123).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Surveys reveal significant skill and trust gaps in AI adoption, with many organisations lacking data strategies, compute resources, and expertise, underscoring the readiness gap highlighted [S16][S23][S25][S29][S15].
MAJOR DISCUSSION POINT
Enterprise capability gaps for AI adoption
G. Narendra Nath
5 arguments, 186 words per minute, 1261 words, 405 seconds
Argument 1
Rapid AI adoption outpaces mitigation; adversarial nation‑states exploit AI, and data becomes the control plane leading to model poisoning
EXPLANATION
Narendra points out that AI is being adopted at breakneck speed, leaving little time for mitigation measures. He warns that nation‑states and adversarial enterprises can weaponise AI, and because data now acts as the control plane, models are vulnerable to poisoning and drift.
EVIDENCE
He notes the breakneck speed of AI adoption and the willingness of enterprises to adopt AI tools (lines 40‑42). He then explains that adversarial nation‑states are using AI as a tool, while data itself serves as the control plane, enabling model poisoning and drift over time (lines 48‑49).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI geopolitics note that nation-states are weaponising AI and that data now serves as a control plane, making models vulnerable to poisoning attacks [S17][S19][S33].
MAJOR DISCUSSION POINT
Dual nature of AI in cybersecurity
AGREED WITH
Daisy Chittilapilly, Dharshan Shanthamurthy, Pradeep Sekar
Argument 2
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
EXPLANATION
Narendra says that there is currently a lack of assessment frameworks for AI systems, and the government is funding projects to develop such frameworks, including an ETI framework and sandbox mechanisms to test AI before production.
EVIDENCE
He mentions the absence of assessment frameworks and the launch of a project funded in November 2024 to create an AI assessment framework, referencing the department of DRD’s ETI framework (lines 260‑267).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent initiatives propose AI assessment frameworks and sandbox environments to test AI before production, as documented in emerging standards and pilot projects [S20][S21][S22].
MAJOR DISCUSSION POINT
Governmental frameworks, capacity building, and assessment mechanisms
AGREED WITH
Daisy Chittilapilly, Pradeep Sekar
DISAGREED WITH
Daisy Chittilapilly
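The sandbox idea in Argument 2, testing an AI system before production, can be sketched as a gate that a candidate model must pass. The check names, thresholds, and the metrics dictionary are illustrative assumptions for the sketch; they are not the ETI framework or any specific government mechanism mentioned in the session.

```python
# Hedged sketch of a pre-production "sandbox gate": a candidate model's
# evaluation metrics must clear every check before deployment is allowed.
# All check names and thresholds are illustrative assumptions.

def check_accuracy(metrics: dict) -> bool:
    return metrics.get("accuracy", 0.0) >= 0.90

def check_jailbreak_rate(metrics: dict) -> bool:
    return metrics.get("jailbreak_success_rate", 1.0) <= 0.01

def check_pii_leakage(metrics: dict) -> bool:
    return metrics.get("pii_leaks_per_1k_prompts", 1.0) == 0.0

CHECKS = [check_accuracy, check_jailbreak_rate, check_pii_leakage]

def sandbox_gate(metrics: dict) -> bool:
    """Return True only if every sandbox check passes."""
    return all(check(metrics) for check in CHECKS)

candidate = {"accuracy": 0.94, "jailbreak_success_rate": 0.002,
             "pii_leaks_per_1k_prompts": 0.0}
print("deploy" if sandbox_gate(candidate) else "hold back")
```

Sector regulators could plug in their own checks (for example, an RBI-style sandbox adding financial-compliance tests), which is why the gate takes a list of checks rather than hard-coding them.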
Argument 3
Highlights existing institutional structures (CERT‑India, CIPC) and sector‑specific sandboxes (RBI, telecom) as foundations for capacity building
EXPLANATION
Narendra emphasizes that India already has institutional bodies like CERT‑India and CIPC, as well as sectoral sandbox regulators such as RBI for finance and telecom, which can be leveraged to test and validate AI technologies safely.
EVIDENCE
He cites the established institutional framework of CERT‑India and CIPC, and mentions sectoral sandbox mechanisms like RBI’s sandbox and telecom’s sandbox that help organisations trial new technologies (lines 268‑270).
MAJOR DISCUSSION POINT
Governmental frameworks, capacity building, and assessment mechanisms
Argument 4
Calls for continuous vigilance: adopting AI for competitive advantage while proactively mitigating dependencies and adverse effects
EXPLANATION
Narendra stresses that AI adoption is a competitive necessity, but nations must also guard against its adverse effects by building capacity, identifying dependencies, and mitigating risks associated with rapid AI deployment.
EVIDENCE
He states that AI adoption provides a competitive edge, but stresses the need to protect against adverse effects, develop capacity, and mitigate dependencies as AI evolves (lines 318‑322).
MAJOR DISCUSSION POINT
Future outlook and five‑year vision for AI and cybersecurity
Argument 5
AI adoption is uneven across sectors, with the health sector lagging behind, creating an AI divide that requires sector‑specific capacity building and assessment frameworks.
EXPLANATION
Narendra highlights that while sectors like finance are relatively mature, the health sector shows high enthusiasm but low maturity, leading to a divide that must be addressed through tailored capacity‑building initiatives and the development of assessment frameworks for AI systems.
EVIDENCE
He contrasts the maturity of the financial sector with the enthusiasm yet lower maturity of the health sector, describing a “digital divide” and an emerging “AI divide” across enterprises, and calls for sector‑specific capacity building and assessment frameworks (lines 242-252, 254-259).
MAJOR DISCUSSION POINT
Sectoral disparities in AI readiness and need for tailored frameworks
Dharshan Shanthamurthy
4 arguments · 174 words per minute · 590 words · 202 seconds
Argument 1
AI levels the playing field for defenders while also creating new asymmetric threats for attackers
EXPLANATION
Dharshan observes that AI gives defenders a technological parity with attackers, enabling large‑scale threat detection, yet it also equips adversaries with powerful tools to launch sophisticated attacks, maintaining an asymmetric threat landscape.
EVIDENCE
He notes that AI provides a level playing field for defenders, allowing them to identify needles in haystacks, while also empowering attackers to industrialise disruption at scale (lines 170‑171 and 162‑176).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies describe AI giving defenders parity with attackers while also enabling sophisticated, large-scale attacks, reflecting the dual asymmetric impact [S33][S27].
MAJOR DISCUSSION POINT
Dual nature of AI in cybersecurity
Argument 2
Calls for an AI security operating system/playbook that enables organizations to leverage AI defensively rather than merely reacting to threats
EXPLANATION
Dharshan advocates for a structured AI security operating system or playbook that guides organisations on how to proactively use AI for defence, shifting from a reactive posture to a strategic one.
EVIDENCE
He explicitly calls for an AI security operating system/playbook that organizations should have to leverage AI defensively (lines 190‑192).
MAJOR DISCUSSION POINT
Governance, trust, and the need for an AI operating system
AGREED WITH
A. S. Lakshminarayanan, Pradeep Sekar, Samrat Kishor
Argument 3
Notes AI as a force multiplier that can both empower SOCs and industrialise attacks, urging a balanced “hope vs. fear” approach
EXPLANATION
Dharshan highlights that AI can dramatically boost security operation centres (SOCs) by automating tasks, but the same technology also enables attackers to scale phishing and social‑engineering attacks, so a balanced perspective is required.
EVIDENCE
He describes AI empowering SOCs through automation (e.g., Microsoft security copilot) while also industrialising attacks such as AI‑driven phishing, emphasizing the need for a balanced hope‑versus‑fear stance (lines 162‑176).
MAJOR DISCUSSION POINT
Strategic risk management and organizational risk lenses
Argument 4
AI offers India a strategic opportunity to develop world‑class cybersecurity talent, positioning the country as a global leader in AI‑driven security.
EXPLANATION
Dharshan argues that the convergence of AI and cybersecurity can be leveraged to create a skilled talent pipeline, enabling India to harness AI for defence while also fostering a new generation of experts who can drive innovation in the sector.
EVIDENCE
He notes that AI can create a level playing field for defenders, and emphasizes the chance to build “world‑class talent” by combining cybersecurity and AI expertise, highlighting the hope side of AI for India (lines 158-183).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on India’s AI workforce transformation and programs to empower developing nations highlight AI as a lever for building world-class cybersecurity talent [S15][S32].
MAJOR DISCUSSION POINT
Talent development and national opportunity in AI‑enabled cybersecurity
A. S. Lakshminarayanan
3 arguments · 162 words per minute · 1324 words · 488 seconds
Argument 1
AI multiplies existing digital‑infrastructure fragility, especially at the edge, by vastly increasing east‑west traffic and long‑lived API sessions
EXPLANATION
Lakshminarayanan warns that current enterprise digital infrastructure is already fragile, and the addition of AI will amplify this fragility, particularly at the edge where AI inference will generate massive east‑west traffic and long‑lived API sessions.
EVIDENCE
He describes the fragility of today’s digital infrastructure and explains that AI will multiply this fragility 100‑fold, increasing east‑west traffic and long‑lived API sessions at the edge (lines 68‑76).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research points to AI increasing traffic loads and exposing fragility in edge infrastructures, raising concerns about digital-infrastructure resilience [S31][S18].
MAJOR DISCUSSION POINT
AI integration into critical infrastructure and system fragility
Argument 2
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions
EXPLANATION
Lakshminarayanan suggests building an AI operating system that combines a context layer, an agentic layer, and a trust/governance layer, enabling LLMs to produce actionable intelligence while being governed securely.
EVIDENCE
He outlines the three layers—context, agentic, and trust/governance—that together form an AI operating system to safely orchestrate LLM actions (lines 87‑90).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Frameworks calling for layered AI governance, combining context, agency, and trust components, are being drafted to orchestrate LLM actions safely [S20][S21].
MAJOR DISCUSSION POINT
Governance, trust, and the need for an AI operating system
AGREED WITH
Dharshan Shanthamurthy, Pradeep Sekar, Samrat Kishor
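The three-layer model described above can be sketched minimally in code. This is an illustrative sketch only: the class names (ContextLayer, AgenticLayer, TrustLayer) and the allow-list policy are assumptions for demonstration, not anything the panel specified.

```python
class ContextLayer:
    """Supplies the enterprise context an LLM-driven agent needs to act sensibly."""
    def enrich(self, request: dict) -> dict:
        # Hypothetical context fields; a real system would pull these from live sources.
        return {**request, "context": {"environment": "production", "data_owner": "finance"}}

class TrustLayer:
    """Governance gate: every proposed action is checked before execution."""
    ALLOWED_ACTIONS = {"read_report", "summarise_logs"}
    def authorise(self, action: str) -> bool:
        return action in self.ALLOWED_ACTIONS

class AgenticLayer:
    """Turns model output into actions, but only via the trust/governance layer."""
    def __init__(self, trust: TrustLayer):
        self.trust = trust
    def execute(self, action: str) -> str:
        if not self.trust.authorise(action):
            return f"BLOCKED: {action}"
        return f"EXECUTED: {action}"

trust = TrustLayer()
agent = AgenticLayer(trust)
ctx = ContextLayer().enrich({"task": "summarise_logs"})
print(agent.execute(ctx["task"]))        # governed action goes through
print(agent.execute("delete_database"))  # ungoverned action is blocked
```

The design point is simply that the agentic layer never acts directly: it can only reach the outside world through the trust layer's policy check.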
Argument 3
Projects that the next five years will define long‑term health, emphasizing the rollout of an AI operating system and the emergence of AI‑native companies that disrupt existing business models
EXPLANATION
Lakshminarayanan foresees the next five years as decisive for the health of enterprises, focusing on deploying an AI operating system and anticipating AI‑native firms that will disrupt current business models.
EVIDENCE
He discusses Tata Communications' internal assessment framework, the need for an AI operating system, and predicts AI‑native companies will disrupt existing models within five years (lines 278‑315).
MAJOR DISCUSSION POINT
Future outlook and five‑year vision for AI and cybersecurity
Pradeep Sekar
3 arguments · 177 words per minute · 829 words · 280 seconds
Argument 1
Emphasises protecting decision‑making and trust through provenance, authenticity, and verification mechanisms
EXPLANATION
Pradeep stresses that cybersecurity must evolve to safeguard not only systems and data but also the decision‑making process, using provenance, authenticity, and verification to ensure trust in AI‑driven transactions.
EVIDENCE
He explains that trust can be measured via provenance, authenticity, and verification, allowing organisations to assess whether a transaction is trustworthy before an AI‑driven agent acts (lines 206‑210).
MAJOR DISCUSSION POINT
Governance, trust, and the need for an AI operating system
AGREED WITH
A. S. Lakshminarayanan, Dharshan Shanthamurthy, Samrat Kishor
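One common way to realise provenance and authenticity checks of this kind is a keyed signature attached at the data's origin and verified before an agent acts. The sketch below uses Python's standard hmac module; the shared key and transaction fields are hypothetical assumptions, not details from the discussion.

```python
import hmac
import hashlib
import json

SECRET = b"shared-provenance-key"  # hypothetical key shared with the data source

def sign(transaction: dict) -> str:
    """Produce a provenance tag over a canonical serialisation of the transaction."""
    payload = json.dumps(transaction, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(transaction: dict, tag: str) -> bool:
    """Check authenticity: any modification after signing invalidates the tag."""
    return hmac.compare_digest(sign(transaction), tag)

tx = {"origin": "erp-system", "amount": 1200, "payee": "vendor-42"}
tag = sign(tx)                      # attached at the point of origin
assert verify(tx, tag)              # agent confirms provenance before acting
tampered = {**tx, "amount": 99000}
assert not verify(tampered, tag)    # modification breaks authenticity
```

In practice an asymmetric signature (so verifiers need no shared secret) would usually be preferred; HMAC keeps the sketch self-contained.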
Argument 2
Introduces three risk lenses for boards: compliance (e.g., EU AI Act), operational (model reliability and availability), and strategic (reputation and financial impact of AI‑driven attacks)
EXPLANATION
Pradeep outlines three perspectives for board‑level risk management: compliance with regulations, operational concerns about model reliability and uptime, and strategic implications such as reputational and financial damage from AI‑driven attacks.
EVIDENCE
He details the three lenses—compliance (EU AI Act, TDPDP), operational (model reliability, service continuity), and strategic (reputation, financial impact)—and notes how boards are beginning to ask these questions (lines 222‑236).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory analyses outline compliance, operational, and strategic risk dimensions for AI, aligning with board-level lenses discussed in AI governance literature [S26][S20].
MAJOR DISCUSSION POINT
Strategic risk management and organizational risk lenses
AGREED WITH
G. Narendra Nath, Daisy Chittilapilly
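The three lenses could be captured in a simple risk register that rolls estimated financial impact up per lens for board reporting. The figures, field names, and example items below are purely illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RiskItem:
    lens: str               # "compliance" | "operational" | "strategic"
    description: str
    estimated_cost: float   # rough financial impact, for board reporting

def board_summary(items: list) -> dict:
    """Aggregate estimated exposure per risk lens."""
    totals: dict = {}
    for item in items:
        totals[item.lens] = totals.get(item.lens, 0.0) + item.estimated_cost
    return totals

register = [
    RiskItem("compliance", "EU AI Act non-conformance fine exposure", 2_000_000),
    RiskItem("operational", "Model outage halts fraud screening", 350_000),
    RiskItem("strategic", "Reputational damage from AI-driven breach", 5_000_000),
]
print(board_summary(register))
```

A real register would also track owners and mitigation status; the point here is only that the lens taxonomy maps cleanly onto structured, quantifiable reporting.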
Argument 3
AI acts as a force multiplier for both defenders and attackers, enhancing detection capabilities while also enabling large‑scale AI‑driven phishing and social‑engineering attacks.
EXPLANATION
Pradeep explains that AI dramatically improves the speed and scale at which security operations can detect threats, but the same technology empowers adversaries to automate and amplify phishing and other social‑engineering campaigns, intensifying the overall threat landscape.
EVIDENCE
He describes AI helping defenders to detect threats at unprecedented scale and speed, while also noting that attackers can industrialise disruption through AI‑driven phishing and social engineering (lines 212-219).
MAJOR DISCUSSION POINT
Dual impact of AI as a force multiplier in cybersecurity
Richard Marko
2 arguments · 137 words per minute · 463 words · 201 seconds
Argument 1
Highlights humans as the weakest link; deep‑fakes and AI‑generated phishing amplify social‑engineering risks
EXPLANATION
Richard points out that people remain the most vulnerable element in cybersecurity, and AI‑generated deep‑fakes and sophisticated phishing increase the effectiveness of social‑engineering attacks.
EVIDENCE
He states that humans are the weakest link and that AI makes it harder to distinguish scams from genuine communications, citing deep‑fakes and AI‑enhanced phishing as examples (lines 98‑100).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rise of deep-fake media and AI-generated phishing campaigns is documented as amplifying human-centric social-engineering threats [S33][S27].
MAJOR DISCUSSION POINT
Human factor, resilience, and evolving threat landscape
Argument 2
Stresses the need for granular visibility into AI agent actions, guarding against interception or manipulation of commands
EXPLANATION
Richard argues that organisations must have detailed insight into what AI agents are doing, ensuring that commands are not intercepted, altered, or executed without supervision.
EVIDENCE
He calls for visibility into what is running in the background, how commands are transferred, and whether they can be intercepted or modified, emphasizing the importance of detailed scrutiny (lines 101‑105).
MAJOR DISCUSSION POINT
Human factor, resilience, and evolving threat landscape
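One way to obtain tamper-evident visibility into agent actions is a hash-chained audit log, where altering any recorded command invalidates every later entry. The sketch below is an illustrative construction under that assumption, not a mechanism Richard described.

```python
import hashlib
import json

class AgentAuditLog:
    """Hash-chained log: each entry's digest covers the command and the previous digest."""
    def __init__(self):
        self.entries = []

    def record(self, command: dict) -> None:
        prev = self.entries[-1]["digest"] if self.entries else "genesis"
        payload = json.dumps(command, sort_keys=True) + prev
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"command": command, "digest": digest})

    def verify(self) -> bool:
        """Recompute the chain; any interception or modification breaks it."""
        prev = "genesis"
        for entry in self.entries:
            payload = json.dumps(entry["command"], sort_keys=True) + prev
            if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True

log = AgentAuditLog()
log.record({"agent": "scheduler", "action": "fetch_calendar"})
log.record({"agent": "scheduler", "action": "send_email"})
assert log.verify()
log.entries[0]["command"]["action"] = "exfiltrate_data"  # simulated tampering
assert not log.verify()
```

This gives the "granular visibility" property in a checkable form: an operator can replay the chain and detect any command that was altered after the fact.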
Samrat Kishor
2 arguments · 175 words per minute · 1101 words · 375 seconds
Argument 1
AI has moved from being an application‑layer add‑on to a fundamental component embedded throughout the infrastructure stack.
EXPLANATION
Samrat explains that AI is no longer just a feature on top of existing systems; it is now woven into the core infrastructure that underpins applications, changing how systems are designed and deployed.
EVIDENCE
He notes that AI is becoming a fundamental part of the infrastructure used to build applications and that the perspective has shifted from viewing AI only at the application layer to it being embedded deep in the infrastructure (lines 26-32).
MAJOR DISCUSSION POINT
Shift in AI integration within technology stacks
Argument 2
Enterprises should adopt a corporate AI responsibility framework, similar to corporate social responsibility, to own and control the actions of AI systems they deploy.
EXPLANATION
Samrat argues that organizations need to formalise accountability for AI, ensuring that AI behaviours are governed, transparent, and aligned with ethical standards, moving beyond mere compliance to proactive stewardship.
EVIDENCE
He likens the emerging need to “corporate AI responsibility” to traditional CSR, stating that corporates must discuss how they control and own the actions of the AI they build and deploy (lines 91-93).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder discussions stress corporate responsibility for preventing AI misuse, advocating formal AI stewardship frameworks akin to CSR [S28][S26].
MAJOR DISCUSSION POINT
Governance and ethical stewardship of AI
Agreements
Agreement Points
AI brings both significant opportunities for security and new, serious risks, creating a dual‑nature dynamic.
Speakers: Daisy Chittilapilly, G. Narendra Nath, Dharshan Shanthamurthy, Pradeep Sekar
AI offers machine‑scale security management but introduces model leakage, jail‑breaking, and open‑source vulnerabilities
Rapid AI adoption outpaces mitigation; adversarial nation‑states exploit AI, and data becomes the control plane leading to model poisoning
AI levels the playing field for defenders while also creating new asymmetric threats
AI acts as a force multiplier for both defenders and attackers, enhancing detection capabilities while also enabling large‑scale AI‑driven phishing and social‑engineering attacks
All four speakers note that AI can improve cyber-defence (e.g., machine-scale management, parity for defenders) but at the same time introduces novel threats such as model leakage, nation-state weaponisation, and AI-driven phishing, highlighting a clear opportunity-risk duality [8-21][40-44][48-49][170-176][212-219].
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects the widely-recognised dual-use nature of AI, highlighted in security analyses that note AI can both strengthen defenses and create novel threats [S65] and in broader policy discussions on AI’s dual-use challenges [S66].
A structured AI governance framework (often described as an AI operating system or security playbook) is essential to ensure trustworthy, controllable AI actions.
Speakers: A. S. Lakshminarayanan, Dharshan Shanthamurthy, Pradeep Sekar, Samrat Kishor
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions
Calls for an AI security operating system/playbook that enables organizations to leverage AI defensively rather than merely reacting to threats
Emphasises protecting decision‑making and trust through provenance, authenticity, and verification mechanisms
Enterprises should adopt a corporate AI responsibility framework, similar to corporate social responsibility, to own and control the actions of AI systems they deploy
Each speaker stresses the need for a layered, policy-driven AI governance model-whether called an AI operating system, a security playbook, or corporate AI responsibility-to manage context, agency, and trust, and to keep AI actions under organisational control [87-90][190-192][206-210][91-93].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for a formal AI operating-system echo emerging policy guidance such as the NSA’s AI Security Center playbook and the call for structured, inclusive AI governance in international forums [S59][S62].
Developing assessment frameworks, capacity‑building initiatives, and clear risk‑management lenses is critical for safe AI deployment.
Speakers: G. Narendra Nath, Daisy Chittilapilly, Pradeep Sekar
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
There is a significant AI readiness gap in large enterprises, with many lacking data strategy, compute capacity, threat understanding, and innovation capability
Introduces three risk lenses for boards: compliance (e.g., EU AI Act), operational (model reliability and availability), and strategic (reputation and financial impact of AI‑driven attacks)
All three speakers call for formal mechanisms-assessment frameworks and sandboxes, readiness programmes to close capability gaps, and board-level risk lenses-to manage AI risks and build capacity across sectors [260-267][118-123][222-236].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on assessment frameworks and capacity-building aligns with recommendations from multistakeholder panels that stress evidence-based approaches, interdisciplinary skill development and governance foundations before deployment [S49][S51][S57].
Similar Viewpoints
Both emphasise the creation of a dedicated AI operating system/playbook that integrates context, agency, and governance to make AI deployments safe and controllable [87-90][190-192].
Speakers: A. S. Lakshminarayanan, Dharshan Shanthamurthy
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions
Calls for an AI security operating system/playbook that enables organizations to leverage AI defensively rather than merely reacting to threats
Both the government representative and the private‑sector executive stress the importance of formal AI assessment frameworks and sandbox‑type testing to ensure secure AI roll‑out [260-267][283-295].
Speakers: G. Narendra Nath, A. S. Lakshminarayanan
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
We developed an assessment framework ourselves… (internal AI operating system and capability matrix) to evaluate AI readiness
Unexpected Consensus
Alignment between a government official and a private‑sector leader on the need for sandbox‑based assessment frameworks for AI security.
Speakers: G. Narendra Nath, A. S. Lakshminarayanan
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
We developed an assessment framework ourselves… (internal AI operating system and capability matrix) to evaluate AI readiness
While Narendra discussed national-level sandbox initiatives and the ETI framework, Lakshminarayanan, representing a major telecom company, independently reported building an internal assessment framework, showing an unexpected convergence of public-policy and private-sector approaches to AI risk assessment [260-267][283-295].
Overall Assessment

The panel shows strong convergence on three core themes: (1) AI’s dual nature of opportunity and risk; (2) the necessity of a layered AI governance/operating‑system model; (3) the urgent need for assessment frameworks, capacity building, and risk‑lens tools. These shared positions cut across government, industry, and academia, indicating a high degree of consensus on how AI should be integrated securely into critical infrastructure and enterprise practice.

High consensus – most speakers align on the same strategic priorities, suggesting that future policy and industry road‑maps are likely to co‑evolve around governance frameworks, capacity development, and balanced risk‑benefit perspectives.

Differences
Different Viewpoints
Different preferred mechanisms for securing AI systems
Speakers: Daisy Chittilapilly, G. Narendra Nath
AI pressures every layer of the stack, requiring a shift from hardware‑centric security appliances to a virtual, distributed security mesh
Announces the creation of AI assessment frameworks and sandboxing initiatives to evaluate security and functionality before production deployment
Daisy argues that security should be embedded in the network fabric through a virtual, distributed mesh and an AI operating system that governs LLM actions [124-132][87-90]. Narendra, by contrast, stresses the need for formal assessment frameworks and sandbox environments to test AI systems prior to deployment, emphasizing regulatory and testing approaches rather than architectural redesign [260-267][268-270]. Both seek safer AI but diverge on whether the solution is architectural (virtual mesh) or procedural (assessment/sandbox).
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over internal safety controls versus external threat-detection mirrors analyses that distinguish safety-focused model validation from security-focused defensive measures [S58].
Focus of capacity‑building efforts – enterprise‑wide AI readiness versus sector‑specific AI divide
Speakers: Daisy Chittilapilly, G. Narendra Nath
There is a significant AI readiness gap in large enterprises, with many lacking data strategy, compute capacity, threat understanding, and innovation capability
AI adoption is uneven across sectors, with the health sector lagging behind, creating an AI divide that requires sector‑specific capacity building and assessment frameworks
Daisy highlights a broad ambition-reality gap across enterprises, pointing to missing data layers, compute, and threat expertise [118-123]. Narendra focuses on sectoral disparities, noting that finance is mature while health is enthusiastic but immature, calling for tailored capacity-building and assessment frameworks for each sector [242-252][254-259]. The disagreement lies in whether capacity-building should be pursued as a universal enterprise initiative or as targeted sector-specific programs.
Unexpected Differences
Human factor versus infrastructure‑centric view of AI risk
Speakers: Richard Marko, A. S. Lakshminarayanan
Highlights humans as the weakest link; deep‑fakes and AI‑generated phishing amplify social‑engineering risks
AI multiplies existing digital‑infrastructure fragility, especially at the edge, by vastly increasing traffic and long‑lived API sessions
Richard stresses that the primary AI-related security challenge lies with people and social engineering [98-100], whereas Lakshminarayanan argues that the core problem is the fragility of the underlying digital infrastructure, which AI will exacerbate [68-76]. The contrast between a human-centric risk focus and an infrastructure-centric risk focus was not anticipated given the overall technical nature of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between human-centric risk considerations and infrastructure-centric security aligns with studies on AI as critical public-service infrastructure that stress adoption barriers and human factors, as well as broader calls for human responsibility in AI governance [S52][S61].
Overall Assessment

The panel displayed broad consensus on AI’s dual nature as both opportunity and risk, but diverged on the primary pathways to secure AI—architectural redesign versus procedural assessment, enterprise‑wide versus sector‑specific capacity building, and differing governance mechanisms. These disagreements highlight the need for coordinated policy that integrates technical, regulatory, and organizational strategies.

Moderate to high: while participants share common goals (secure, trustworthy AI), they propose distinct, sometimes competing, approaches. This could lead to fragmented initiatives unless a harmonised framework that balances architectural, regulatory, and governance measures is established.

Partial Agreements
All three speakers agree that AI governance and trust are essential, but they differ on the primary mechanism: Lakshminarayanan suggests a layered AI operating system [87-90]; Daisy focuses on embedding security in the network fabric via a virtual mesh [124-132]; Pradeep stresses provenance‑based verification of AI‑driven decisions [206-210]. The shared goal is trustworthy AI, yet the implementation pathways diverge.
Speakers: A. S. Lakshminarayanan, Daisy Chittilapilly, Pradeep Sekar
Proposes an AI operating system that unites context, agentic, and trust/governance layers to safely orchestrate LLM‑driven actions
AI pressures every layer of the stack, requiring a shift from hardware‑centric security appliances to a virtual, distributed security mesh
Emphasises protecting decision‑making and trust through provenance, authenticity, and verification mechanisms
Samrat calls for a formal corporate AI responsibility model [91-93], while Pradeep proposes board‑level risk lenses (compliance, operational, strategic) to govern AI [222-236]. Both aim to embed AI governance at the organizational level, but Samrat emphasizes a responsibility framework, whereas Pradeep focuses on risk‑lens based oversight.
Speakers: Samrat Kishor, Pradeep Sekar
Enterprises should adopt a corporate AI responsibility framework, similar to corporate social responsibility, to own and control the actions of AI systems they deploy
Introduces three risk lenses for boards: compliance, operational, and strategic
Takeaways
Key takeaways
AI presents both a powerful opportunity for scaling cybersecurity defenses and a new set of risks such as model leakage, jail‑breaking, data‑driven control‑plane attacks, and open‑source vulnerabilities.
The speed of AI adoption outpaces the development of mitigation measures, creating a gap where adversarial nation‑states and sophisticated attackers can exploit AI tools.
Existing digital infrastructure is already fragile; AI amplifies this fragility, especially at the edge, by dramatically increasing east‑west traffic and long‑lived API sessions.
Security must shift from hardware‑centric, perimeter‑only appliances to a virtual, distributed security mesh that is embedded throughout the network and AI stack.
A dedicated AI operating system (or AI security operating system) is needed to provide context, agentic control, and governance/trust layers for safe LLM‑driven actions.
Human factors remain the weakest link; AI‑generated deep‑fakes and phishing heighten social‑engineering threats, demanding granular visibility into AI agent behavior.
Boards should evaluate AI risk through three lenses: compliance (e.g., EU AI Act), operational (model reliability, availability), and strategic (reputation and financial impact of AI‑driven attacks).
Governmental bodies are beginning to create assessment frameworks, sandbox environments, and capacity‑building programs to address AI security at national scale.
The next five years are critical: organizations need to mature AI governance, talent, and platform capabilities, while AI‑native companies are expected to disrupt existing business models.
Resolutions and action items
Develop and adopt AI assessment frameworks for security and functional validation (suggested by G. Narendra Nath and A. S. Lakshminarayanan).
Leverage sector‑specific sandboxes (RBI, telecom, etc.) to pilot AI solutions before production deployment.
Create an AI operating system that integrates context, agentic, and trust/governance layers (proposed by A. S. Lakshminarayanan).
Establish AI security operating system/playbooks to shift organizations from reactive defense to proactive, AI‑enabled security operations (highlighted by Dharshan Shanthamurthy).
Invest in capacity building: talent development, data strategy, compute resources, and governance mechanisms (noted by Daisy Chittilapilly).
Encourage corporate AI responsibility frameworks to govern AI actions and outcomes (raised by Samrat Kishor).
Unresolved issues
Specific standards and metrics for measuring AI trust, provenance, and authenticity across enterprises remain undefined.
How to operationalize continuous monitoring of AI model drift and prevent long‑term degradation without clear industry guidelines.
The exact mechanisms for integrating AI governance into existing IT/OT environments, especially at the edge, were not detailed.
Methods for quantifying strategic AI risk (financial impact, reputation) for board reporting need further development.
Coordination between national security agencies and private sector on threat intelligence sharing for AI‑enabled attacks was mentioned but not resolved.
Suggested compromises
Balance rapid AI adoption with deliberate security integration: adopt AI for competitive advantage while simultaneously investing in mitigation and governance (as advocated by multiple speakers).
Combine hope (AI as a force multiplier for defenders) with fear (AI as a new attack vector) to drive balanced investment in both offensive and defensive capabilities.
Transition from hardware‑only security appliances to a hybrid approach that includes virtual, distributed security functions, acknowledging current infrastructure constraints while moving toward a more flexible model.
Thought Provoking Comments
AI is both an opportunity and a challenge – we can use it to manage security at machine scale, but we also have to protect models from jailbreaking, data leakage, poisoning, and the inherent vulnerabilities of open‑source models.
She framed AI in cybersecurity as a dual‑edged sword, highlighting not just the promise of automation but the concrete new attack surfaces that AI introduces.
Set the stage for the entire panel by establishing the central tension; prompted other speakers (e.g., Narendra and Lakshmi) to discuss specific risks (model drift, edge‑infrastructure strain) and mitigation strategies.
Speaker: Daisy Chittilapilly
The adoption of AI is happening at breakneck speed, and while many enterprises use AI to boost productivity, nation‑states and adversarial enterprises are weaponising AI, creating a disconnect that must be bridged.
He highlighted the unprecedented rapidity of AI uptake and the parallel rise of sophisticated AI‑enabled threats, emphasizing a strategic gap between defenders and attackers.
Shifted the conversation from technical challenges to a geopolitical perspective; led to deeper discussion on national‑scale implications and the need for faster, coordinated responses.
Speaker: G. Narendra Nath
Digital infrastructure is already fragile; adding AI will multiply that fragility a hundredfold, especially by exploding east‑west traffic and long‑lived API sessions at the edge, demanding an AI operating system with context, agentic, and trust‑governance layers.
He quantified the systemic impact of AI on existing IT/OT ecosystems and introduced the concept of an AI operating system as a holistic governance framework.
Introduced a new architectural paradigm that redirected the discussion toward platform‑level solutions; other panelists (Daisy, Richard) referenced this when talking about network virtualization and resilience.
Speaker: A. S. Lakshminarayanan
Resilience now must protect not only the infrastructure but also the hidden actions of AI agents—understanding what runs in the background, how commands are transferred, and guarding against interception or modification.
He expanded the notion of resilience to include the opaque behavior of autonomous agents, linking technical risk to human factors like deep‑fakes.
Deepened the analysis of AI‑driven threats by adding a layer of operational opacity; reinforced Lakshmi’s call for trust and governance, and prompted Daisy to discuss virtualized security meshes.
Speaker: Richard Marko
AI levels the playing field for defenders, turning the historically asymmetric cyber‑war into a more balanced contest; we can now use AI agents in SOCs to automate shift handovers and other tasks.
He offered a hopeful counter‑narrative to the fear‑focused discourse, suggesting AI can restore parity between attackers and defenders and create new talent opportunities.
Shifted tone from risk‑centric to opportunity‑centric; inspired subsequent comments about building AI‑enabled security operations and the need for AI‑security operating systems.
Speaker: Dharshan Shanthamurthy
We should view AI risk through three lenses – compliance (e.g., EU AI Act), operational (model reliability, trust), and strategic (financial impact on reputation) – and translate these into quantifiable financial metrics for the board.
He provided a structured risk‑management framework that moves the conversation from abstract threats to concrete governance and reporting mechanisms.
Guided the panel toward actionable governance models; influenced Lakshmi’s discussion of assessment frameworks and Narendra’s mention of sandboxing and regulatory structures.
Speaker: Pradeep Sekar
AI will not just automate tasks; it will scale decisions, requiring a new paradigm where we assess capabilities (talent, culture, platform) against desired outcomes (efficiency, revenue, trust) on a two‑axis framework.
He introduced a strategic assessment matrix that reframes AI implementation as a capability‑outcome alignment problem rather than a collection of isolated use cases.
Provided a concrete roadmap for enterprises, prompting other speakers to reference the need for platform approaches and governance layers; set the tone for the forward‑looking “five‑year” vision.
Speaker: A. S. Lakshminarayanan (later in the discussion)
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved the panel from a broad framing of AI as a double‑edged technology to concrete, multi‑dimensional challenges and solutions. Daisy’s dual‑nature framing opened the floor, while Narendra’s warning about rapid, adversarial adoption shifted focus to national security. Lakshmi’s exposition of infrastructure fragility and the AI operating system concept introduced a new architectural paradigm, which Richard expanded into a deeper resilience narrative. Dharshan’s hopeful view of AI leveling the cyber‑defense playing field rebalanced the tone, and Pradeep’s three‑lens risk framework gave the conversation a practical governance structure. Finally, Lakshmi’s capability‑outcome matrix provided a strategic roadmap, tying together talent, platforms, and trust. Collectively, these comments redirected the dialogue from abstract concerns to actionable frameworks, influencing subsequent speakers and shaping a forward‑looking consensus on the need for holistic, governance‑driven AI integration in cybersecurity.

Follow-up Questions
How can we develop effective methods to protect AI models from jailbreaking, data leakage, and poisoning, particularly for open‑source models?
She highlighted the inherent vulnerabilities of open‑source AI models and the need for detection and mitigation techniques.
Speaker: Daisy Chittilapilly
What clear definitions and distinctions are needed between cybersecurity issues and AI malfunction or poor design, to avoid confusion in risk assessment?
He noted a lack of clarity on what constitutes a cybersecurity problem versus an AI design flaw, which hampers effective mitigation.
Speaker: G. Narendra Nath
How can comprehensive assessment frameworks be created for evaluating the security and functional integrity of AI systems before deployment?
Multiple participants mentioned the absence of standardized testing, certification, and assessment processes for AI deployments.
Speaker: G. Narendra Nath, A. S. Lakshminarayanan, Pradeep Sekar
What should an AI operating system look like, incorporating context, agentic, trust, and governance layers to safely manage LLMs and AI agents?
She advocated for an AI OS that provides governance and trust mechanisms to control AI behavior across applications.
Speaker: A. S. Lakshminarayanan
How can we ensure resilience by monitoring and securing the background processes, command pipelines, and agent interactions in AI‑driven workflows?
He emphasized the need to understand and protect the detailed steps an AI agent takes, including interception risks.
Speaker: Richard Marko
What strategies are needed to close the ambition‑versus‑reality gap in AI readiness, especially regarding data strategy, compute capacity, and AI threat awareness?
She presented data showing many enterprises lack essential foundations despite strong AI deployment ambitions.
Speaker: Daisy Chittilapilly
How can organizations quantify AI‑related operational and strategic risks, including trust, provenance, authenticity, and financial impact, for board‑level decision‑making?
He outlined three risk lenses (compliance, operational, strategic) and the need for metrics to translate AI risk into financial terms.
Speaker: Pradeep Sekar
What are the implications of AI‑induced increases in east‑west traffic, API call volume, and edge inference on the fragility of critical infrastructure?
She warned that AI will multiply network strain, especially at the edge, potentially overwhelming existing infrastructure.
Speaker: A. S. Lakshminarayanan
What capacity‑building initiatives and talent development programs are required to bridge the AI and cybersecurity skill gap across sectors such as health?
He highlighted a digital and AI divide, stressing the need for skilled personnel and training frameworks.
Speaker: G. Narendra Nath
How can sandbox regulatory frameworks be expanded and standardized to safely test AI innovations across diverse sectors?
He referenced existing sandboxes (RBI, telecom) and suggested broader use for AI experimentation before production rollout.
Speaker: G. Narendra Nath
What new business models and disruption patterns might emerge from AI‑native companies, and how should incumbents prepare?
She warned that AI could spawn a new class of disruptors, similar to past internet and fintech waves, requiring strategic foresight.
Speaker: A. S. Lakshminarayanan
What novel categories of technology (silicon, software, systems) need to be designed to meet the exponential performance demands of AI workloads?
She argued that existing hardware‑software stacks are insufficient for AI’s exponential growth, calling for re‑imagined technology stacks.
Speaker: Daisy Chittilapilly
How can corporations institutionalize ‘AI responsibility’ to govern and own the actions of AI systems they develop and deploy?
He introduced the concept of corporate AI responsibility, indicating a need for policies and frameworks to manage AI behavior.
Speaker: Samrat Kishor

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Why science matters in global AI governance


Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened with Anil Ananthaswamy stating that effective governance requires understanding, and the UN Secretary-General highlighted the urgency of grounding AI policy in science [1-2][5-15]. Guterres announced the creation of an Independent International Scientific Panel on AI, describing it as independent, globally diverse and multidisciplinary, intended to provide a shared baseline of analysis for all countries [17-22][24-26]. He argued that science-led guardrails can protect human rights while accelerating innovation, and that a universal scientific language can align technical standards and reduce fragmented rule-making [27-33][38-43]. Emphasising human oversight, he said policy must be evidence-based, with clear accountability so decisions are not outsourced to algorithms [45-49].


In the fireside chat, Yoshua Bengio noted that AI scientists often disagree on future risks, making it essential to identify where evidence is strong and where uncertainty remains, similar to climate tipping-point debates [70-78]. He stressed that rapid AI advances create a lag between scientific findings and policy action, requiring neutral, accessible evaluations for policymakers [81-84]. Soumya Swaminathan compared the AI challenge to the COVID-19 response, urging rapid, globally coordinated evidence mechanisms and inclusive systems that reflect diverse contexts, especially in low-income settings [206-214][217-220]. Balaraman Ravindran highlighted the lack of data on AI’s social impacts in the Global South, citing education and agriculture as areas where evidence on effectiveness and equity is still missing [225-236][229-240].


Anne Bouverot argued that misunderstanding fuels fear, and that accurate scientific panels are needed to inform both citizens and policymakers, using past job-loss predictions as an example of how evidence shapes policy choices [250-275]. Ajay Sood described India’s National AI Governance Framework, which combines public-private partnerships, techno-legal design, and capacity-building to manage risks while scaling AI services [283-300]. Singapore’s Minister Josephine Teo reinforced the need for sustained research investment, a balance between speed and caution, and international cooperation to create interoperable standards, positioning the UN as the legitimate hub for such coordination [320-340][345-352].


Across speakers, there was consensus that scientific assessment, shared benchmarks, and inclusive dialogue are critical to prevent fragmented regulations and to operationalise high-level AI principles such as transparency and safety [33-34][331-340]. The discussion concluded that a UN-anchored, multidisciplinary scientific panel can bridge evidence and policy, making AI governance both effective and trustworthy for global development goals [45-49][55][345-346][354-357].


Keypoints


Major discussion points


Science as the foundation of global AI governance – The UN is building a practical architecture that puts science at the centre, creating an Independent International Scientific Panel to provide a shared baseline of analysis and interoperable technical standards so that “countries at every level of AI capacity can act with the same clarity” and “guardrails… can travel with the technology” [17-24][36-43][45-48].


Bridging the science-policy gap amid uncertainty and rapid change – Yoshua Bengio stresses that AI research shows “very rapid growth… uneven… surpassing most people on some measurements and being kind of stupid… on others,” creating a lag between scientific evidence and policy decisions; he argues for neutral, fact-based evaluations that recognise uncertainty, highlight severe-risk clues, and help policymakers act despite limited proof [67-78][81-86].


Industry’s role in fostering a common, evidence-based understanding – Brad Smith warns that debates often stall because “people don’t have a common understanding of the problem” and are “too quick to want to blame someone,” urging a shift from hype to facts and emphasizing that the UN is the best platform to build that shared scientific basis [103-149][143-148].


Ensuring inclusivity and equity, especially for the Global South – Both Bengio and panelists highlight the need for a globally diverse, multidisciplinary panel that “makes sure that everyone is at the table and no one is on the menu,” and stress that evidence must be actionable for low-income contexts (e.g., COVID-19 experience, AI impacts on youth in India) and that equity should be at the heart of AI for the public good [90-94][213-218][225-236][320-334].


Concrete steps: benchmarks, capacity-building, and operationalising principles – Singapore’s Minister Josephine Teo outlines concrete investments (a $1 billion AI R&D plan, a digital trust centre, AI safety institute) and calls for “standardized evaluation methodologies,” international cooperation on interoperable tools, and capacity-building so that high-level AI principles become actionable across jurisdictions [320-345].


Overall purpose / goal


The session was convened to launch and explain the United Nations’ new science-driven framework for AI governance-particularly the Independent International Scientific Panel-and to explore how robust, globally-shared scientific evidence can bridge the gap between rapid AI innovation and responsible policy, while ensuring inclusive participation from all regions and sectors.


Overall tone


The discussion began with a formal, urgent tone emphasizing the need for scientific grounding ([1-4], [17-24]). It shifted to a reflective, technical tone during the fireside chat, acknowledging uncertainty and the difficulty of translating science into policy ([67-86]). The industry contribution added a pragmatic, cautionary tone, warning against hype and urging common understanding ([103-149]). Throughout, the tone remained constructive and collaborative, moving toward optimism as panelists highlighted concrete initiatives and global cooperation ([320-345]). No major negative or confrontational shifts were observed; the conversation consistently aimed at building consensus and actionable pathways.


Speakers

António Guterres


Role / Title: Secretary-General of the United Nations


Areas of Expertise: International diplomacy, multilateral cooperation, AI governance leadership


Sources: [S3]


Anil Ananthaswamy


Role / Title: Moderator / Host, Author of The Elegant Math Behind Machine Learning


Areas of Expertise: Science communication, machine learning, public engagement


Sources: [S26]


Brad Smith


Role / Title: Vice Chair and President, Microsoft Corporation


Areas of Expertise: Technology policy, AI regulation, corporate leadership, privacy & cybersecurity


Sources: [S14], [S15]


Yoshua Bengio


Role / Title: Professor, Université de Montréal; leading AI researcher


Areas of Expertise: Deep learning, AI safety, machine learning research


Sources: [S19]


Balaraman Ravindran


Role / Title: Professor, Indian Institute of Technology Madras; Member, International Independent Scientific Panel on AI


Areas of Expertise: AI, machine learning, applications in agriculture and education, AI policy implications for the Global South


Sources: [S20]


Ajay Sood


Role / Title: Principal Scientific Advisor to the Government of India


Areas of Expertise: National AI governance, digital public infrastructure, techno-legal frameworks, AI risk assessment


Sources: [S23]


Amandeep Singh Gill


Role / Title: Under-Secretary-General and Special Envoy for Digital and Emerging Technologies, United Nations


Areas of Expertise: Digital policy, emerging technologies, multilateral coordination, science-policy interface


Sources: [S11], [S12], [S13]


Anne Bouverot


Role / Title: France’s Special Envoy for Artificial Intelligence; former Director General, GSMA


Areas of Expertise: AI policy, digital trust, telecommunications, AI ethics and governance


Sources: [S9], [S10]


Soumya Swaminathan


Role / Title: Former Chief Scientist, World Health Organization


Areas of Expertise: Global health, evidence-based policy, pandemic response, scientific advisory leadership


Sources: [S1], [S2]


Josephine Teo


Role / Title: Minister for Digital Development and Information, Singapore


Areas of Expertise: Digital policy, AI R&D investment, AI safety and trust infrastructure, international AI governance


Sources: [S6], [S7], [S8]


Additional speakers:


– None. All speakers appearing in the transcript are covered by the provided speakers-names list.


Full session report: Comprehensive analysis and detailed insights

The session opened with Anil Ananthaswamy reminding the audience that “we cannot govern what we do not understand” and introducing United Nations Secretary-General António Guterres, whose leadership places science and multilateral cooperation at the heart of AI governance [1-4][5-15]. Guterres framed the challenge as a race against “AI innovation moving at the speed of light, outpacing our collective ability to fully understand it” and argued that policy must be built on trusted facts rather than hype or disinformation [10-16]. He noted that the UN has been “indispensable to not just the protection of people, but the preservation of our species” by helping humanity live with nuclear weapons without using them [120-124]. He announced the creation of an Independent International Scientific Panel on Artificial Intelligence, describing it as “fully independent, globally diverse and multidisciplinary” and intended to give every country, regardless of AI capacity, a clear analytical baseline [17-24][25-30]. The Secretary-General stressed that science-led guardrails protect human rights, preserve agency and accelerate innovation, positioning science as a “universal language” that can create interoperable technical standards so that a startup in New Delhi can scale globally with confidence [31-34][38-43][45-49].


In the subsequent fireside chat, Professor Yoshua Bengio highlighted the difficulty of reaching consensus among AI researchers, noting that “Scientists themselves don’t always agree on what to expect for the future” [70-71]. He argued that a neutral, fact-based synthesis is required to provide a shared understanding for policymakers and used a climate-tipping-point analogy to illustrate why precaution is needed even when evidence is incomplete: “if the risk has huge severity… then policymakers need to pay attention” despite a lack of proof [72-78]. Bengio also pointed out that the rapid, uneven growth of AI capabilities creates an inevitable lag between scientific publications and policy action, because studies involving people can take months while AI systems evolve week by week [81-84]. He suggested that governance should focus on “high-level principles that can be applied without having to go into the details because the details are going to change” [85-88]. This stance contrasts with Guterres’ emphasis on “aligning technical baselines, shared testing and risk measurement” to ensure interoperability and safety across borders [44-48].


Brad Smith, Vice-Chair and President of Microsoft, reinforced the need for a common, evidence-based understanding, warning that “people don’t have a common understanding of the problem” and that debates often devolve into blame without first agreeing on the problem’s context [143-148]. He invoked an 80-year economic-cycle theory to argue that the United Nations, created just over 80 years ago, remains humanity’s “greatest accomplishment” and an indispensable platform for coordinating AI governance [103-112][126-129]. Smith also noted that the UN has helped humanity “live with the ever-constant presence of nuclear weapons without using them,” echoing Guterres’ point on existential risk management [115-118]. He criticised the culture of grandiose predictions, noting that his own grading of industry forecasts yielded an average accuracy of only 25 % and that “there is no such thing as a crystal ball” [152-166][167-168].


Dr Soumya Swaminathan underscored the importance of inclusive, rapid evidence generation by comparing the AI challenge to the COVID-19 response. She described how, during the pandemic, her team reviewed “a couple of hundred publications every day” to issue timely recommendations, and she called for a global scientific body-“something like the IPCC” for AI-to provide fast, trustworthy evidence that can be adapted to diverse national contexts [206-214][217-220]. She warned that without such mechanisms, policy may be made in advance of evidence, risking irrelevance or harm, and emphasized that “policy must change when evidence becomes clear” [218-220].


Representing the Global South, Professor Balaraman Ravindran highlighted the paucity of data on AI’s social impacts in India, questioning how AI affects youth, children’s mental health, and agricultural productivity and noting that most stories come from the West and that “we don’t have evidence of AI interventions” in education or farming [229-236]. His remarks illustrate the need for locally-generated benchmarks to evaluate AI’s effectiveness and equity, especially in low-resource settings [229-236].


Anne Bouverot, France’s Special Envoy for AI, echoed the theme that misunderstanding fuels fear. She quoted Marie Curie-“nothing in life is to be feared, everything is to be understood”-to argue that scientific panels are essential for both citizens and policymakers [250-259]. Bouverot also cited past job-loss predictions, referencing both the Oxford and Elon Musk forecasts, to show how divergent scientific forecasts lead to vastly different policy responses, from universal basic income to reskilling programmes, underscoring the necessity of accurate, evidence-based forecasts [268-275].


Ajay Sood described India’s National AI Governance Framework, which combines public-private partnerships, “techno-legal” design, and capacity-building to embed governance directly into AI systems, mirroring the country’s earlier digital public-infrastructure experience [283-300]. He acknowledged the current uncertainty about AI risks but argued that embedding safeguards at the technical level offers a pragmatic path forward [291-300].


Singapore’s Minister Josephine Teo presented a concrete investment agenda, noting a US$1 billion national AI R&D plan that funds foundational and applied research on responsible AI [320-322]. She described the Digital Trust Centre and the AI Safety Institute as national assets that operationalise safety standards. Teo stressed the need to balance rapid AI development with careful, evidence-based policy, arguing that “both impulses are necessary” and that international cooperation is essential for interoperable standards [323-326][327-334]. She reiterated the UN’s unique legitimacy for global AI discourse, citing the UN High-Level Advisory Body on AI report (published end-2024) as the basis for the new Independent International Scientific Panel, and warned that operationalising high-level AI principles through standardized evaluation methodologies and capacity-building for all countries is the current challenge [335-345]. Regional actions announced included Singapore’s hosting of the International Scientific Exchange on AI Safety (first edition) and its second edition on 17-18 May [350-353], the Singapore AI Safety Red-Team Challenge (the first multicultural, multilingual exercise for the Asia-Pacific), Singapore’s chairmanship of the ASEAN Work Group on AI Governance and the development of the ASEAN Guide on AI Governance and Ethics (extending to generative AI) [360-363], and an India-wide collaboration on the International Network for Advanced AI Measurement, Evaluation and Science for joint testing efforts [364-367].


Moderator Amandeep Singh Gill framed the discussion as a “science-evidence-policy loop,” opening with the technical observation that “≈ 90 % of AI is matrix multiplication; a 0.01 % improvement in its efficiency has huge energy implications” [190-192]. He linked the Independent International Scientific Panel’s work to turning “facts and evidence” into a reliable engine for the Sustainable Development Goals [201-206][309-312]. Gill’s rapid-fire round reinforced the consensus that science must be central, that common technical baselines are vital for interoperability, and that inclusive evidence-generation is essential for equitable outcomes [241-246][309-312].


In conclusion, the participants reached broad agreement that science is the indispensable foundation for AI governance and that the UN-anchored Independent International Scientific Panel will provide the neutral, multidisciplinary evidence needed to bridge the gap between fast-moving technology and responsible policy. Action items include fast-tracking the panel’s first report ahead of the Global AI Governance Summit in July [22-24]; Singapore’s commitment to host the second International Scientific Exchange, develop regional safety benchmarks, and advance the ASEAN Guide and the International Network for Advanced AI Measurement [350-353][360-363][364-367]; Microsoft’s pledge to devote resources to UN-led scientific efforts [182-183]; and India’s rollout of its National AI Governance Framework with techno-legal safeguards [283-300]. The session closed with Minister Josephine Teo reaffirming the UN’s role as the legitimate hub for global AI discourse, urging continued collaboration to turn scientific insight into trustworthy, inclusive governance [335-345][350-353][354-357].


Session transcript: Complete transcript of the session
Anil Ananthaswamy

Today’s session begins from a simple but powerful premise. We cannot govern what we do not understand. It is my honor to open this session with a special address by the Secretary General of the United Nations, whose leadership has placed science and multilateral cooperation at the forefront of global AI governance. So please join me in welcoming His Excellency Antonio Guterres.

António Guterres

Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Thank you for joining this discussion on the role of science in international AI governance. We are barreling into the unknown. AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it, let alone govern it. AI does not stop at borders, and no nation can fully grasp its implications on its own. If we want AI to serve humanity, policy cannot be built on guesswork. It cannot be built on hype or disinformation. We need facts we can trust and share across countries and across sectors. Less noise, more knowledge. That is why the United Nations is building a practical architecture that puts science at the center of international cooperation on AI.

And it starts with the Independent International Scientific Panel on Artificial Intelligence. This panel is designed to help close the AI knowledge gap and assess the real impacts of AI across economies and societies so countries at every level of AI capacity can act with the same clarity. It is fully independent, it is globally diverse, and it is multidisciplinary, because AI touches every area of every society. And I’m delighted that the General Assembly of the United Nations confirmed the 40 experts I proposed to member states. Now the real work begins, on a fast track to deliver a first report ahead of the global summit, the Global Dialogue on AI Governance, in July. The panel will provide a shared baseline of analysis.

helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so policy is neither a blunt instrument that stifles progress nor a bystander to harm. That is how science transcends decision-making. When we understand what systems can do and what they cannot, we can move from rough measures to smarter, risk-based guardrails. Guardrails that protect people, uphold human rights, and preserve human agency. Guardrails that build confidence and give business clarity so innovation can move faster in the right direction. Science-led governance is not a brake on progress. It is an accelerator for solutions. A way to make progress safer, fairer, and more widely shared. It helps us identify where AI can do the most good the fastest.

And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countries can prepare, protect, and invest in people. Today, international cooperation is difficult. Trust is strained, and technological rivalry is growing. Without a common baseline, fragmentation wins, with different regions and different countries operating under incompatible policies and technical standards. A patchwork of rules will raise costs, weaken safety, and widen divides. Science is a universal language. Guided by the independent panel and the global dialogue on AI governance, we can align with the world. We can align our technical baselines. When we agree on how to test systems and measure risk, we create interoperability. So a start-up in New Delhi can scale globally with confidence because the benchmarks are shared, and safety can travel with the technology.

Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not a slogan. And that requires meaningful human oversight in every high-stakes decision, in justice, health care, credit. And it requires clear accountability so responsibility is never outsourced to an algorithm. People must understand how decisions are made, challenge them, and get answers. Excellencies, ladies and gentlemen, the message is simple. Less hype, less fear. More facts and evidence. Guided by science, we can transform AI from a source of uncertainty into a reliable engine for the sustainable development goals. Let us build a future where policy is as smart as the technology it seeks to guide. Thank you.

Anil Ananthaswamy

Thank you, Secretary General, for those inspiring opening remarks. Ladies and gentlemen, we were going to have Mr. Brad Smith, Vice Chair and President of Microsoft Corporation, as our next speaker, but he’s running a bit late, so we will move to the next item in the agenda. I would like to welcome Professor Yoshua Bengio to the stage, Scientific Director of Mila and one of the world’s leading AI researchers. He and I will be in a fireside chat, and we’re hoping that Mr. Brad Smith will be able to join us very soon. Thank you. So, welcome, Professor Bengio.

Yoshua Bengio

Thank you for having me.

Anil Ananthaswamy

Our pleasure. So, you are the most cited computer scientist, and I looked it up: you’re actually the most cited living scientist today, and you have played a unique role at the global science-policy interface, including through the UN Scientific Advisory Board and your leadership of the International AI Safety Report. So from your perspective, how do these science-policy interfaces actually work in practice, and where do they add the most value?

Yoshua Bengio

So it’s tricky, right, because there are many different views, especially different interests in business, in different governments. And the role of science, the role of a kind of synthesis of science that we want for the UN panel, that we have sought for the AI Safety Report, is to provide a shared understanding as a basis for those political discussions, and not be influenced, as much as is humanly possible, by those tensions that exist in our societies. And I think it’s particularly important because, maybe unlike in the case of climate, the scientists themselves don’t always agree on what to expect for the future, or even how to interpret the science that exists.

I just want to add something. Something that’s a little bit subtle about this kind of exercise is to be able to recognize the uncertainty and the divergences that exist: where is it that scientists agree, where is it that the evidence is strong, where is it that we have clues that matter. Even if we’re not certain about a particular risk, we might have clues about it. But if the risk has huge severity, in other words, if it does unfold it could be catastrophic, then policymakers need to pay attention. And it’s always difficult when we don’t have proof that something terrible is going to happen. Maybe a good analogy is tipping points in climate, right?

Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation is similar in AI, in the sense that we don’t have the experience of, say, machines that are really smart and can change society, and be even potentially smarter than us. So how can we make the right policy decisions? That’s why it is so important to have as neutral and as fact-based an evaluation of what is going on available to everyone, in a language that is accessible to everyone, and of course for policymakers. Which, by the way, is difficult for scientists to achieve: they need help, they need iterations, they need feedback from people who are used to the interface between science

Anil Ananthaswamy

Is there anything in particular about the highly technical nature of AI and also the pace of change that makes this interface particularly difficult?

Yoshua Bengio

Yes, yes. The facts shown in the scientific benchmarks, across labs, companies and academia, show very rapid growth in the capabilities of these systems, and that growth is uneven. So we see AIs even surpassing most people on some measurements of capability, and being kind of stupid, or like a six-year-old, on some other things. It’s very difficult to grasp what that means. But because it’s moving so fast, there’s always going to be a lag: even the scientific papers take time to be written, and if there are studies, think about studies that involve people, they’re going to take months. So by the time we start seeing clues that there’s a potential problem, you can think of something recent that was not expected, like the psychological effects on people of these chatbots, we now have lots of anecdotal evidence and we’re only starting to see the scientific studies. And of course on the policy side it’s going to be even more difficult, and even later, because those discussions are going to happen after we see the scientific evidence. So there is going to be a lag, and that’s a real problem, because things could move

Anil Ananthaswamy

So maybe that leads well into our next question. We often hear that AI governance is moving too slowly and from your experience, what kinds of scientific assessments or benchmarks could realistically keep pace with this rapid change?

Yoshua Bengio

Yeah, that’s a great question. My opinion on this is that we should be thinking not just about policy in the usual sense of coming up with principles; we should strive for high-level principles that can be applied without having to go into the details, because the details are going to change. And the second thing is that I think we should strive for technologies that are going to help implement those guardrails in the field, in the deployment of AI, because otherwise there’s not enough time to…

Anil Ananthaswamy

Well, thank you for those insights. And also congratulations on your recent appointment to the Independent International Scientific Panel on AI. In a few words, how do you see this new panel helping to strengthen the link between science and global AI policymaking?

Yoshua Bengio

Well, I think there’s something really important about this panel, and it’s its global aspect and being rooted in the UN. And the reason I’m saying this is that AI is going to be transforming our world very clearly, and it’s going to have global effects, whether on the side of the benefits or on the side of the risks, but also in the kind of power relationships that are going to be changing in the future. And I’m personally very concerned about how this will unfold for developing countries in the Global South. And we need to work in a multidisciplinary way so that we can foresee those effects and we can start discussions to make sure that everyone is at the table and no one is on the menu.

Anil Ananthaswamy

Well said, Professor Bengio. Well, thank you very much for kick-starting our discussion. We will now turn to our panel. So, ladies and gentlemen, it is essential that discussions about AI policy include the voices of key industry actors, and I am pleased to invite Mr. Brad Smith, Vice Chair and President, Microsoft Corporation, for his keynote address.

Brad Smith

Well, good morning, everyone. It’s a pleasure to be here. My apologies for being a few minutes late. I want to offer a couple of thoughts this morning. The first thing I think we should come together to think about is that, in my opinion, this is a moment in time when we need to reflect on and reinvest in the importance of the United Nations. There is a well -known economic theory that says that humanity is, in many ways, almost destined to repeat its great economic mistakes every 80 years. The reason it’s 80 years is because that is basically the lifespan of human beings. And so every 80 years, almost everyone who had any living memory of a prior financial calamity has left the planet.

If you look at the Great Recession that started in 2008, what you realize is that it happened 79 years after the stock market crash that led to the Great Depression in 1929. And you can follow this series of financial mistakes all the way back to the bursting of the tulip bubble in the Netherlands hundreds of years ago. I think there is a corollary worth thinking about. Just as there is a risk that humanity forgets the mistakes it made 80 years ago, humanity runs the risk of forgetting the great successes it created 80 years ago. It was just over 80 years ago that the world came together to create the United Nations. It was, in my opinion, one of humanity’s greatest accomplishments of the 20th century.

It is a unique organization in a very imperfect world. And so, of course, on any day and any year, it is possible for anyone to blame the United Nations for the imperfections that we see all around us. But the truth is this. Those imperfections are fewer, and their consequences are less disastrous, in my view, because of the United Nations. And one of the great things about working in a job like mine at Microsoft, in my opinion, is that I get to work in a global organization. We have subsidiaries in 120 countries. We do work in 190 countries. We see the world. It turns out that everywhere we go, we see the United Nations. Sometimes it’s the United Nations Development Program, working to foster economic development.

Sometimes it is UNHCR, helping refugees. Sometimes it is the UN Office of Human Rights, seeking to protect human rights. But the truth is, if there’s a problem, the United Nations is almost always part of the solution. We need to remember this. And we need to remember that however challenging the last 80 years have been, we have managed, as humanity, as a species, to live with the ever-constant presence of nuclear weapons without using them or destroying ourselves. The United Nations has, in fact, in my view, been indispensable to not just the protection of people, but the preservation of our species. Why does that matter now? Why should we talk about it today and this week in Delhi?

Well, because here we are on the cusp of the future. A technology that we all know will likely change the future. Here we are in the second month of the second quarter of the 21st century, and we need to focus on how we bring the institutions on which we rely into that future. So then let me talk about a second aspect that I think is so important to think about this month. One of the things I’m constantly struck by, leading a global organization, is how often everyone disagrees with each other about almost everything. But one of the things I’ve learned along the way is that I think one of the reasons people so quickly disagree is that we rush so quickly to debate competing solutions.

This happens in domestic politics. It happens in international diplomacy. It, frankly, happens in a global company. It actually happens everywhere, even in families. As soon as there’s a problem, people want to talk about the solution. And then people have different solutions, and then they debate, and they disagree, and they argue, and sometimes it’s even worse than that. One of the things I’ve learned is the reason people so often disagree about the solution is they don’t have a common understanding of the problem. They don’t spend enough time talking about the problem. They don’t have a shared contextual understanding of the problem they’re trying to solve. They’re too quick to want to blame someone for the problem, and then that spirals into a discussion that becomes completely unconstructive.

Why does that matter today? Because what we’re here to talk about today is all about creating a more common understanding together, based on science, of where artificial intelligence is going. This is an indispensable tool. Indeed, it’s a critical service for humanity so we can all learn together, we can all think together, we can all understand together what is going on in the world. I think it’s especially critical, to be honest, when it comes to artificial intelligence, because if you consider most of the conversations you have about this technology, I would argue that they have two flaws. The first flaw is that they usually involve people making very grandiose predictions about the future.

You know what? I’ve worked in the tech sector for 32 years. I have listened for more than three decades to my colleagues in my industry around the world make bold predictions about the future. No one ever holds them accountable a decade later for whether they were right or wrong. I used the researcher agent in Microsoft Copilot a couple weekends ago, and I loaded a lot of names. I won’t say whom, but you can guess. And I said, look at all the predictions they made about all the technologies, and look at the predictions they made about when these technologies would come to do something or another, and give them a grade. The average grade was 25%. You couldn’t even get close to the top.

You were at the bottom. So let’s just understand one thing together. There is no such thing as a crystal ball. No one has one. But what we do have is the ability to understand where we are today. And what we do have is a better understanding to just appreciate what is happening each and every year. There is a second flaw, in my view, in many of the conversations that take place, including at this AI summit. Everybody wants to talk about how they’re going to make machines smarter. That’s interesting. I think it’s interesting to imagine living in a world where a data center is like a country of geniuses. But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses.

We’re all geniuses already. What that should remind us is that human capability is neither fixed nor finite. And so what really matters, in my opinion, is not whether we are going to build machines that are smarter than humans. Yes, in some ways we will. But how will we use those machines to make people smarter, to help us do what we need to do? That is what this effort is all about. Now, let’s harness the power of science to build a common understanding of what is changing each year, and then let’s connect it with the global dialogue on governance so we can pursue policies that will ensure that this technology serves people. There’s no better place to get started than here.

There’s no better time than now. And let’s face it, there is no better institution on the planet that can do more to serve humanity and protect the world than the United Nations. And on behalf of Microsoft, I just want you to know we are putting our full energy and resources into doing everything that we can to help. Thank you very much.

Anil Ananthaswamy

Thank you. Thank you, Mr. Smith, for those insights on responsibility, accountability, and the role of industry. We now turn to our panel. Our panel brings together scientific leadership, public policy expertise, and international coordination. Please welcome to the stage our speakers: Professor Balaraman Ravindran, IIT Madras; Soumya Swaminathan, former Chief Scientist, WHO; Ajay Kumar Sood, Principal Scientific Advisor to the Government of India; and Anne Bouverot, France’s Special Envoy for AI. I am also pleased to introduce our moderator, Amandeep Singh Gill, Undersecretary General and Special Envoy for Digital and Emerging Technologies. I invite him to guide the discussion. Thank you very much.

Amandeep Singh Gill

Thank you very much. Thank you, Anil, for leading us, and for those who have not read his book, The Elegant Math Behind Machine Learning, please do have a go at it. We cannot govern something that we don’t understand. So take something as simple as this: if 90% of AI is matrix multiplication, then, as he was explaining, a 0.01% improvement in the efficiency of matrix multiplication has huge energy implications. So I want to welcome our esteemed panelists. The stage has been set by very inspiring keynotes and a fireside chat, so we will dive straight in. And since we are running a little short of time, I’m going to compress the two rounds into one rapid-fire round.

So all of you have worked on or are working on the science-policy interface. And my sense is that there is a loop here: a loop between science and evidence, and between evidence and policy. And we want to explore that loop today in the context of the significant development of the setting up of the International Independent Scientific Panel at the United Nations. So I want to start with you, Soumya. You were the first chief scientist, the first woman chief scientist, at the WHO, and you worked at a very difficult time, during COVID, when trusted evidence was so critical. So in your view, what makes evidence that comes from science trusted and actionable for policymakers?

Soumya Swaminathan

The field is moving very rapidly. In COVID, we had to review a couple of hundred publications every day to understand what was happening on different aspects: on the virus, on the immunology, on how vaccines and drugs were working. And we had to make recommendations based on the best available evidence that day. I think we may be in a similar situation with AI, and it’s wonderful that the UN has now set up this body, which I see as something like the IPCC. I think we do need global governance. We’re talking now about preventing future pandemics by sharing data on pathogens, making sure that we have protocols in place where countries are willing to share that data, and also, of course, to share the tools, the vaccines or drugs, when they become available, in case there is another pandemic.

Similarly, I hope that this scientific body that’s been set up by the UN would also establish systems that would link to national bodies and systems, and that would ensure the voices of all are heard. One of the things during COVID was that some of our recommendations were relevant in high-income countries but not in low-income countries, because the context is very different. And the WHO was criticized for this, I think rightfully so, and we need to learn from those mistakes. So it’s the voices, for example, of women: a low-income woman, a farmer in a remote place, is going to use technology very differently from a large farmer with access to lots of machines in Europe or North America.

So if AI has to work for everyone, then we need to make sure that those voices are heard. And ultimately, I think that loop you talked about, sometimes policy is made in advance of evidence. You have to. You can’t wait. But the policy must change. It must ask for the relevant evidence and be able to adapt when that is clear.

Amandeep Singh Gill

Thank you very much, Soumya. I’m going to come to you, Ravi, Professor Balaraman Ravindran. Now, as AI policies begin to take shape, and you’ve been involved in some policymaking yourself, what signals from regulators or public sector users should most urgently guide future AI research priorities? In a sense, the loop coming back into research.

Balaraman Ravindran

So thank you for that question. Right now, especially in the Global South, we don’t completely understand the implications of adopting AI and how it is going to affect society, people’s livelihoods and everything. In fact, I also feel that we don’t have enough evidence about how AI is affecting the social fabric: how are children getting increasingly isolated with the adoption of AI, and is the effect uniform between cities and rural India, where the cultural setup is very different, and so on? So if the government, as we heard our honourable Prime Minister say yesterday, should focus more on youth and the impact of AI on youth, what evidence do we have about what is happening in India? We hear stories about the dependence of children on AI models, and also of people who are mentally challenged or under stress, but all of these stories are coming to us from the West. So what is happening in India? When these kinds of policy decisions have to be made, and the government says that AI should be pushing efficiency in agriculture, do we have a benchmark in India that can evaluate the efficiency and effectiveness of these AI models in agriculture? What kinds of flaws happen when I, for example, build a bot that can act as a co-pilot for a farmer? These are bigger challenges, so we have a lot of questions.

Amandeep Singh Gill

If I can quickly follow up: where do you actually see evidence for impact in the sustainable development goals space? Just a quick example or two.

Balaraman Ravindran

That was not in the notes he gave us earlier, so I have to think on my feet here. Let me take one thing that we are very familiar with and are working on right now, which is the education space. So, for example, we don’t have evidence on AI interventions: how likely are they to change student learning behavior? We have done some preliminary studies; the author of the study is somewhere in the audience, because he has been sending me pictures of the stage. What we have found is that the effectiveness of AI adoption is a direct function of habit. So if the students are using AI more, then they tend to…

But now I don’t know what the causal factor is there. I don’t know whether the causal factor is that they are using AI more and therefore get better results, or that they use AI more because they are getting better results. These are questions that we have to ask, even in something as simple as education. I am saying simple because there is a lot of positive buzz around using AI in education. But even there, we need a lot more evidence to come.

Amandeep Singh Gill

Thank you, Ravi, and we’re honored to have you on the new International Independent Scientific Panel. So if I may jump to you, Anne, and you’re an AI scientist yourself. You know, all of us know you as a special envoy of President Macron, who made the February summit happen last year in Paris, but you’re also an AI scientist. So from your perspective, you kind of lived in these two worlds. So what works best for the interface? What kind of scientific evidence would you take to President Macron if you were to convince him to change the policy?

Anne Bouverot

Well, thank you for the question. I studied AI a long time ago, but I’m not really a scientist. But I try to understand, of course. Understanding, I think, is probably the very first thing. And before we get to policymakers, I think it matters for citizens, for us as human beings. The things that we don’t understand, we tend to be more afraid of. I often quote the scientist Marie Curie. She wasn’t an AI scientist, but she’s one of the brightest scientists that we’ve had, a two-time Nobel laureate. And there’s a wonderful quote by her. She says, nothing in life is to be feared; everything is to be understood. And now is the time to understand more, because of course there were things to be afraid of at the time when she was living, and there are now as well.

So trying to understand things, having scientific panels, is definitely the right thing to do. And we’re fully supportive in France of the scientific panel. We’re very proud that Joëlle Barral is our nominee; she’s a scientist in AI and health and a member of the panel. This is absolutely excellent. So, yes, understanding things is absolutely key. And then maybe just a second point, to give an example of how understanding something or not can lead to very different policy decisions in the field of AI and work. We’ve had predictions. I remember in 2013, during the previous AI revolution, scientists at Oxford, I believe, said that within 10 years half of the jobs would disappear. We haven’t seen that.

At the AI summit in Bletchley Park we had, for very good reasons, frontier AI leaders; Elon Musk in particular said that within two years half of the jobs would disappear. So, of course, the fact that this didn’t happen doesn’t mean that there isn’t a risk for work. Of course there’s a risk for work. But if your potential or probable outcome is the end of jobs, then you need to think about universal basic income, about what we are going to do with all the people who don’t have jobs. If what economists are saying is that 80% of the jobs will be transformed, then the policy outcome is training, skilling, reskilling, and helping to educate people. That’s why listening to economists, and having the International Labour Organization and other institutions really follow closely what is happening, in which countries, for younger people, for older people, for women, for men, for different types of jobs, that’s super…

Amandeep Singh Gill

Merci beaucoup, Anne. Merci. And I’m going to turn to you, Professor Sood. You occupy an important position within the Indian system, and you look at science broadly. And India has deployed some of these technologies at societal scale: India Stack, the digital public infrastructure. So how do you look at the AI opportunity, and importantly, how do you look at AI risks? And how are you prioritizing R&D allocations to harness the opportunities and manage the risks?

Ajay Sood

Thank you very much for having me on the panel. On all the aspects you asked about, we have had very extensive consultations across all stakeholders, and we came out with the National AI Governance Framework, not a regulatory framework, but a framework for how we really handle all aspects of AI. There we have looked at how we enable compute facilities and compute resources for our people, because we are not at a scale where a few trillion dollars are being invested. So we came out with a framework which, we think, with public-private partnership, could enable it. And we could see the results of that within a year, as demonstrated at the AI Summit: the release of AI models and so on.

The other aspect which is very important, as you rightly said, is risk assessment. This is where, as has been mentioned, our experience with the digital public infrastructure comes in, which has been rolled out at a very public scale with safety and security that is as difficult as in AI. AI, of course, is more difficult; we still do not know the risks. But when we were dealing with the digital public infrastructure, whether for financial transactions or for identity verification and so on, it was a challenge. And that was done by embedding governance through technical design. This is what we call techno-legal, which the Honourable Prime Minister mentioned at the Paris summit.

And also mentioned here. So this is where we are suggesting that this could be one way to look at it. It’s not that everything is laid out; we will need a framework for that, and we will need technologies for that. But this is one way that will allow a smooth interaction, if we can bring in this technological framework.

Amandeep Singh Gill

Thank you so much for those insights. Now, since we are running out of time, I’m going to discriminate against the men on the panel, so my apologies in advance. I’m going to turn back to you, Soumya and Anne, for a 40-second, 30-second reflection. What do you think in terms of the pace and direction of the technology, the opportunities, including for accelerating scientific discovery, and the risks? What would be your advice for the International Independent Scientific Panel? Maybe, Anne, you can go first. Forty seconds.

Anne Bouverot

Yes, I think AI has strong potential for helping science; we’ve seen that with the two Nobel Prizes in physics and chemistry a year back. There are many more areas in science where AI can help. But it can only be possible if we have databases of scientific data that are available to the world, constructed by scientists and funded by governments and international institutions around the world. So this is a very important topic for research.

Amandeep Singh Gill

Thank you, Anne. Soumya, you have the last one.

Soumya Swaminathan

Yes, I agree very much with Anne. And I think that the scientific panel could actually help network many more groups of scientists from around the world, perhaps sectorally, for example, what’s happening in health, what’s happening in education, what’s happening in agriculture: looking at the evidence as it emerges, encouraging research, setting priorities, but also looking at safety and risks, because I think that’s going to be very important. There may be unanticipated risks and harms that we have not considered. And, of course, equity: being a UN-led panel, ensuring that equity is at the heart of AI and that it is being done for the public good.

Amandeep Singh Gill

Fantastic, thank you. That’s a great closing. Ladies and gentlemen, please join me in thanking our outstanding panel. We are going to move straight to the closing. Over to you, Anil.

Anil Ananthaswamy

Thank you to the panel for a rich and forward-looking discussion. To close this session, it is my honor to invite Josephine Teo, Minister for Digital Development and Information of Singapore, to deliver the closing remarks. Minister Josephine Teo.

Josephine Teo

Good morning, everyone. First, allow me to thank the Secretary-General for his remarks; they serve as very useful guidance to all of us working on this important technology. For the closing this morning, I thought it would perhaps be useful to offer a perspective from a small state. Singapore has a population of just 6 million people, and more than 30 years ago, at the UN, we became the convener of the Forum of Small States, which still has about 108 members. I will just make three points on how we look at developments on this front. The first point is that we believe in AI being used as a force for the public good, but to do so, it is important that we continue to invest in the science that underpins it and ground trust in evidence. This certainly requires sustained investment in research, and is also the reason why we set aside a billion dollars in a national AI R&D plan, which will include foundational and applied research into responsible AI. We believe in it, and we have to put money behind this effort. There are of course other investments, such as in building up a digital trust center.

It’s our designated AI safety institute that has been participating in important conversations on this topic, as well as setting up a center for advanced technologies in online safety. So those are just some of the efforts that we can dedicate resources to as a small state. The second point I want to make is that there is almost always going to be a tension between moving quickly, given the pace of AI development, and moving carefully, given the latest evidence that presents itself on what we should be paying attention to. Both impulses are necessary, and we believe it is not impossible to balance them through the integration of science and policy. It is not easy, but it is not an effort that we must give up on.

I should just add that, on this score, it will be much better if we can cooperate internationally to develop sound approaches that can also be interoperable across different jurisdictions. And this is one effort that we believe underpins the work that is being carried out by the UN. And this brings me to my third point. I want to highlight the important role that an organisation like the United Nations plays in facilitating global discourse to bridge science and policy. I cannot overemphasise the importance of this effort. We must recognise that the global AI governance landscape is becoming increasingly fragmented. There are multiple initiatives, frameworks and institutions. The UN’s unique value lies in its legitimacy and inclusiveness, which encourage interoperability across efforts.

The Secretary-General talked about this too. We therefore welcome the establishment of the Independent International Scientific Panel on AI, building on the work of the UN High-Level Advisory Body on AI, which published its report on governing AI for humanity at the end of 2024. We note the panel’s multidisciplinary approach, covering machine learning, applied AI, social science and ethics; all of these are necessary to address the complexity of AI governance challenges. Finally, I would just like to acknowledge that we now have substantial convergence on the high-level AI principles. Yoshua talked about this: transparency, accountability, fairness, safety. But the challenge is in operationalizing them. We need to find standardized evaluation methodologies that work across different regulatory contexts.

We need capacity building so that all countries can meaningfully engage with the technical evidence and the technical challenges, and not just those with large AI research ecosystems. I would encourage all stakeholders to view scientific input not as a constraint on policy flexibility, but as a foundation for more durable, effective governance that can maintain public trust. We need to keep the conversations going: one where science informs governance, and governance sharpens science. I would perhaps end by highlighting Singapore’s continued commitment to contribute to advancing these discussions. We were very fortunate to host the International Scientific Exchange on AI Safety and to bring about the Singapore Consensus on Global AI Safety Research Priorities.

Yoshua was in Singapore for this very momentous event. We will continue to participate in the joint testing efforts of the International Network for Advanced AI Measurement, Evaluation and Science. We have organized two editions of the Singapore AI Safety Red Teaming Challenge, the first multicultural and multilingual AI safety red-teaming exercise focused on the Asia-Pacific region. And as chair of the ASEAN Working Group on AI Governance, we have actively spearheaded efforts to foster a trusted environment in ASEAN by adapting global norms and best practices for ASEAN, and by bringing about regional harmonization through the ASEAN Guide on AI Governance and Ethics, as well as expanding it to address the risks in generative AI. We are now working within ASEAN to explore practical tools for AI safety testing, and we aim to collectively develop a set of AI safety benchmarks that reflect our region’s concerns.

And finally, I’d like to welcome all colleagues to join us in Singapore for the second edition of the International Scientific Exchange, which we expect to take place on the 17th and 18th of May, and we look forward to furthering…

Anil Ananthaswamy

Thank you very much once again. Thank you, Minister Teo, for your closing remarks. This session is now concluded. Thank you very much. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (16)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“António Guterres’ leadership places science and multilateral cooperation at the heart of AI governance.”

The knowledge base records Guterres emphasizing the importance of science in global AI governance and calling for evidence-based, multilateral approaches [S20].

Confirmedhigh

“AI innovation is moving at the speed of light, outpacing our collective ability to fully understand it.”

Guterres is noted as saying technological developments are unfolding at an unprecedented speed and that AI advancement is outpacing regulation and understanding [S89] and [S94].

Confirmedhigh

“Policy must be built on trusted facts rather than hype or disinformation.”

Guterres called for replacing hype and fear with shared, evidence-based approaches to AI policy [S5].

Confirmedhigh

“The creation of an Independent International Scientific Panel on Artificial Intelligence, described as fully independent, globally diverse and multidisciplinary, to give every country a clear analytical baseline.”

The panel is identified in the knowledge base as the first global scientific body on AI, independent and multidisciplinary, intended to provide expert evidence for all nations [S92] and [S93].

Additional Contextmedium

“The UN has been “indispensable to not just the protection of people, but the preservation of our species” by helping humanity live with nuclear weapons without using them.”

The knowledge base highlights the UN’s broader indispensable role in preventing regional crises and preserving humanity, though it does not specifically mention nuclear-weapon deterrence [S90] and [S91].

Additional Contextmedium

“Rapid, uneven growth of AI capabilities creates a lag between scientific publications and policy action because studies involving people can take months while AI systems evolve week by week.”

The pacing problem between fast-moving technology and slower governance is documented in the knowledge base, underscoring the same lag described by Bengio [S47].

Confirmed (high)

“The United Nations, created just over 80 years ago, remains humanity’s greatest accomplishment and an indispensable platform for coordinating AI governance.”

The UN’s indispensable nature and its 80-year history are affirmed in the knowledge base, which describes the organization as essential for global cooperation and crisis prevention [S90] and [S30].

External Sources (99)
S1
AI Meets Agriculture Building Food Security and Climate Resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S2
https://dig.watch/event/india-ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S3
(Day 1) General Debate – General Assembly, 79th session: morning session — – António Guterres, Secretary-General of the United Nations César Bernardo Arévalo de León – Guatemala : Your Excellenc…
S4
Keynote-HE Emmanuel Macron — -Antonio Guterres: Title – His Excellency (likely UN Secretary-General based on context); Role – Delivered opening addre…
S5
Keynote-António Guterres — -Moderator: Role/Title: Discussion moderator; Areas of expertise: Not mentioned -Mr. Sundar Pichai: Role/Title: Not spe…
S6
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S8
S9
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S10
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S11
Amandeep Singh Gill — Mr Gill holds a PhD in Nuclear Learning in Multilateral Forums from King’s College, London, a Bachelor of Technology in …
S12
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — – Amandeep Singh Gill: UN Secretary General’s envoy on technology Amandeep Singh Gill broadened the scope of potential …
S13
A Digital Future for All (morning sessions) — – Amandeep Singh Gill – UN Secretary General’s Envoy in Technology Amandeep Singh Gill: Good morning. How are we toda…
S14
Keynote-Brad Smith — -Brad Smith: Role/Title: Vice Chair and President of Microsoft; Areas of expertise: Technology policy, privacy, cybersec…
S15
Brad Smith — As Microsoft’s vice chair and president, Brad Smith leads a team of more than 1,900 business, legal and corporate affair…
S16
Microsoft Vice Chair and President Brad Smith testified before a Senate Judiciary subcommittee in a hearing titled ‘Over…
S17
Transcript from the hearing — Let me introduce the witnesses and seize this moment to let you have the floor. We’re honored to be joined by Dario Amad…
S18
UN Secretary-General unveils Science and Technology Advisory Board — The United Nations Secretary-General, António Guterres, announced the creation of aScientific Advisory Boardto provide i…
S19
Driving U.S. Innovation in Artificial Intelligence — 17. Yoshua Bengio – Professor, University of Montreal
S20
Why science matters in global AI governance — -Balaraman Ravindran- Professor at IIT Madras, member of International Independent Scientific Panel
S21
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Balaraman Ravindran- Dr. Urvashi Aneja
S22
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — – Balaraman Ravindran- Abdurrahman Habib – Balaraman Ravindran- S. Krishnan
S23
Why science matters in global AI governance — -Ajay Sood- Principal Scientific Advisor to the Government of India
S24
WS #202 The UN Cybercrime Treaty and Transnational Repression — Joey Shea: with the headphones on. We’re going to begin the session. My name is Joey Shea. I cover Saudi Arabia for …
S25
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S26
Why science matters in global AI governance — -Anil Ananthaswamy- Moderator/Host, Author of “The Elegant Math Behind Machine Learning”
S27
Artificial intelligence (AI) – UN Security Council — António Guterres, the Secretary-General, emphasized that “humanity must always retain control over decision-making functi…
S28
IGF 2024 Opening Ceremony — – António Guterres: UN Secretary General António Guterres: Excellencies, I am pleased to greet the Internet Governance …
S29
Software.gov — The interoperability of systems is maintained by establishing common standards and rules.
S30
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — Because there’s not enough past evidence to be sure that a particular tipping point is going to happen. So the situation…
S31
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Nicholas Thompson: Yoshua? Yoshua Bengio: All right, there are several things that Andrew said that I think are wrong…
S32
Science under siege from AI, integrity of research at risk — AI is rapidly transforming the landscape of scientific research, but not always for the better. A growing concern is the p…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S34
Building inclusive global digital governance (CIGI) — The impact of digital technologies, AI, data management, and governance is a subject of ongoing debate, with both opport…
S35
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — Diversity in evidence production and sharing is crucial
S36
Session — – The need for inclusion of diverse views, not just representation
S37
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S38
Data first in the AI era — – **Equity and Access as Core Challenges**: A central theme was ensuring equitable access to both data and the benefits …
S39
World Economic Forum Panel on Quantum Information Science and Technology — Equity and governance frameworks are crucial to ensure quantum technologies benefit all populations globally rather than…
S40
Global AI Policy Framework: International Cooperation and Historical Perspectives — The scientific panel will provide evidence-based policy assessments, whilst the global dialogue will enable multilateral…
S41
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a gap between scientific reports and actionable policy guidance that could be filled with evidence-based policy …
S42
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S43
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S44
Measuring Gender Digital Inequality in the Global South — One of the speakers shared the opinion that although progress is being made in terms of digital skills and education, th…
S45
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — Minister Teo offered insights from Singapore’s experience navigating AI development amid great power competition. Singap…
S46
Lightning Talk #209 Safeguarding Diverse Independent News Media in Policy — Background and Research Context: none identified beyond those in the speakers’ names list.
S47
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S48
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S49
Table of contents — + Even though Estonia is esteemed as a digital country in the world, our attention and resources are largely directed to…
S50
Software.gov — The interoperability of systems is maintained by establishing common standards and rules.
S51
Law, Tech, Humanity, and Trust — Technical Standards and Interoperability Technical standardization is crucial for global interoperability
S52
Why science matters in global AI governance — And it helps us anticipate impacts early, from risks for children, to labor markets, to manipulation at scale. So countr…
S53
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Alexandra Kozik: Thank you very much and thank you so much for inviting us to this debate. Good morning from Brussels, of…
S54
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S55
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort: thank you Isadora yeah and thanks for giving me the opportunity to say a few things I there’s a little bit …
S56
AI Governance Dialogue: Steering the future of AI — Martin argues that high-level policy commitments must be accompanied by detailed technical standards to be effective. Wi…
S57
Foreword — – i. To achieve digital transformation, policy and regulation should be more holistic. Cross-sectoral collaboration alon…
S58
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Harmonization of policies across the region was identified as a critical goal to enable seamless transactions and integr…
S59
Artificial Intelligence & Emerging Tech — Jörn Erbguth: Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S60
What is it about AI that we need to regulate? — Regional coordination emerged as a key middle layer between global and local approaches. Folake Olagunju articulated this…
S61
IGF 2024 Opening Ceremony — This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. I…
S62
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Inclusivity is another key aspect of AI governance. It is crucial to have more inclusive conversations and ensure the pa…
S63
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspec…
S64
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S65
Setting the Rules_ Global AI Standards for Growth and Governance — Key areas of convergence included the importance of process-oriented standards that can adapt to evolving capabilities, …
S66
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Throughout the discussion, speakers emphasised that effective AI assurance cannot be achieved by individual organisation…
S67
The role of standards in shaping a safe and sustainable AI-driven future — He further expounded on the collaborative essence of standardisation work, which relies on mutual trust, understanding, …
S68
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S69
Open Forum #30 High Level Review of AI Governance Including the Discussion — International Cooperation and Framework Coordination The UN’s role should focus on providing independent scientific res…
S70
Why science matters in global AI governance — Global governance is needed with systems linking national bodies to ensure all voices are heard, especially from develop…
S71
UNSC meeting: Scientific developments, peace and security — China:President, China, thanks. Foreign Minister Cassius for presiding over the meeting. I listened carefully to the pre…
S72
AI Safety at the Global Level Insights from Digital Ministers Of — There’s a gap between scientific reports and actionable policy guidance that could be filled with evidence-based policy …
S73
How AI Drives Innovation and Economic Growth — Kremer argues that while there are forces that may widen gaps, AI has significant potential to narrow development dispar…
S74
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S75
https://dig.watch/event/india-ai-impact-summit-2026/why-science-metters-in-global-ai-governance — But as I mentioned yesterday, compared to the people who lived in the Bronze Age, we’re all geniuses. We’re all geniuses…
S76
In brief — Humanitarian actors need to be aware of the different nuances of the term ‘evidence-based’, particularly w…
S77
Panel Discussion AI &amp; Cybersecurity _ India AI Impact Summit — The panel discussion addressed needs of the Global South, with particular focus on capacity building for women and youth
S78
AI for Good Impact Initiative — Ebtesam Almazrouei: Thank you, Fred. Your Royal Highness, Your Excellencies, esteemed guests, allow me first to extend my…
S79
Inclusive AI governance: Perspectives from the Global South — At the 2024 Internet Governance Forum (IGF) in Riyadh, the Data and AI Governance coalition convened a panel to explore th…
S80
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou: So, thank you and welcome everybody to this very important session at the WSIS in this rainy weather. Today, I…
S81
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — While the panel focused heavily on Global South inclusion, an audience member challenged this narrow focus by highlighti…
S82
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — Minister Teo offered insights from Singapore’s experience navigating AI development amid great power competition. Singap…
S83
Policymaker’s Guide to International AI Safety Coordination — This observation set the analytical framework for much of the subsequent discussion. It influenced Minister Teo’s detail…
S84
Opening &amp; Plenary segment: Summit of the Future – General Assembly, 3rd plenary meeting, 79th session — Multiple speakers including António Guterres, UN Secretary-General
S85
(Day 2) General Debate – General Assembly, 79th session: afternoon session — – Antonio Guterres: Secretary-General of the United Nations Allah Maye Halina – Chad: Madame President, Heads of State…
S86
UN: Summit of the Future Global Call — Melissa Fleming, the UN Under-Secretary-General for Global Communications, is moderating a global call ahead of the summ…
S87
AI Meets Cybersecurity Trust Governance & Global Security — “Move fast, break things.”[113]”And the motto there is move deliberately and maintain things.”[114]”How to be able to ge…
S88
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S89
IGF 2019 – Opening ceremony — United Nations Secretary-General Antonio Guterres opened his speech by drawing parallels with German Chancellor Angela Me…
S90
The 80th session of the UN General Assembly (UNGA 80) – Day 2 — Indispensable nature of the UN: Argued that in a time of extreme complexity and uncertainty, the UN is not only useful bu…
S91
vi CONTENTS — As Dag Hammarskjöld, the UN’s great second Secretary-General, put it, the United Nations was not created to take human…
S92
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The independent International Scientific Panel on artificial intelligence is the first global scientific body on…
S93
9821st meeting — 2. Creation of an International Scientific Panel on Artificial Intelligence Mr. President, allow me to make some recomm…
S94
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S95
AI and Digital Predictions for 2024 report — Discussions far from consensus.
S96
Is Geopolitical ‘Coopetition’ Possible? — Maros Sefcovic, a prominent advocate for global cooperation, emphasises the critical need to foster collaboration amidst…
S97
Open Forum: Liberating Science — In conclusion, climate change misinformation and disinformation hinder efforts to tackle the climate crisis by promoting…
S98
WS #270 Understanding digital exclusion in AI era — This highlights the tension between policy development and technological progress, particularly in countries where gover…
S99
AI for Humanity: AI based on Human Rights (WorldBank) — Stating that technology developments occur at a rapid pace implies a need for due diligence and risk assessment to keep …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
António Guterres
3 arguments · 110 words per minute · 653 words · 353 seconds
Argument 1
Science‑centered architecture for AI governance
EXPLANATION
Guterres argues that AI governance must be built around scientific knowledge, placing science at the core of international cooperation to ensure policies are evidence‑based rather than speculative. He stresses that a science‑led architecture will provide risk‑based guardrails that protect rights while accelerating progress.
EVIDENCE
He states that the United Nations is building a practical architecture that puts science at the centre of international cooperation on AI and that the Independent International Scientific Panel is designed to close the AI knowledge gap and provide a shared baseline for all countries, moving from blunt measures to smarter, risk-based guardrails that protect people and uphold human rights [17-20][24-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guterres stresses that AI governance must be built around scientific knowledge, replacing hype with shared evidence and calling for common safety measures and interoperability standards in his keynote and the “Why science matters” discussion [S5][S20][S29][S27].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan, Josephine Teo
Argument 2
Panel provides a shared baseline of analysis for all nations
EXPLANATION
Guterres explains that the new Independent International Scientific Panel will deliver a common analytical foundation, enabling every country—regardless of AI capacity—to understand impacts and act with clarity. This shared baseline is meant to shift discussions from philosophical debates to technical coordination.
EVIDENCE
He describes the panel as designed to help close the AI knowledge gap, assess real impacts across economies and societies, and give countries at every level of AI capacity the same clarity, providing a shared baseline of analysis [19-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The independent International Scientific Panel is described as delivering a common analytical foundation and likened to an IPCC-style mechanism for AI in the Guterres keynote and the “Why science matters” report, reinforcing its role as a shared baseline [S5][S20][S34].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
Argument 3
Common technical baselines and shared testing standards enable interoperability
EXPLANATION
Guterres contends that agreeing on common technical benchmarks for testing AI systems creates interoperability, allowing technologies to scale globally with confidence. Without such baselines, fragmented rules would raise costs and safety risks.
EVIDENCE
He notes that when we agree on how to test systems and measure risk we create interoperability, giving the example that a startup in New Delhi can scale globally because the benchmarks are shared and safety travels with the technology [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guterres calls for common safety measures, testing standards and interoperability across borders, echoing the emphasis on shared technical standards in the keynote and the software.gov description of interoperability through common rules [S5][S29].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
AGREED WITH
Josephine Teo, Soumya Swaminathan, Amandeep Singh Gill
DISAGREED WITH
Yoshua Bengio
Yoshua Bengio
3 arguments · 141 words per minute · 828 words · 351 seconds
Argument 1
Neutral, fact‑based synthesis to inform policy
EXPLANATION
Bengio stresses that the scientific panel should produce a neutral, fact‑based synthesis that offers a shared understanding for policymakers, insulated from societal tensions. This synthesis helps identify where scientists agree, where evidence is strong, and where uncertainties remain.
EVIDENCE
He says the role of the synthesis is to provide a shared understanding as a basis for political discussions and to be as uninfluenced by tensions as possible, highlighting the need to recognize uncertainties, points of agreement, and strong evidence [68-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bengio argues for a neutral, fact-based synthesis to guide policymakers; this position is echoed in the Guterres keynote and the “Why science matters” briefing, and is reflected in his Davos remarks on providing shared understanding [S5][S20][S31].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
António Guterres, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan, Josephine Teo
Argument 2
AI capabilities are growing unevenly and faster than scientific publishing, creating a lag
EXPLANATION
Bengio points out that AI capabilities are advancing rapidly and unevenly, outpacing the slower cycle of scientific studies and policy responses, which creates a lag between emerging risks and regulatory action. This lag hampers timely mitigation of potential harms.
EVIDENCE
He describes rapid growth of AI capabilities across labs and companies, noting that scientific papers and studies take months, so clues of potential problems appear only after a delay, exemplified by unexpected psychological effects of chatbots that were first observed anecdotally before scientific studies began [81-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bengio highlights the rapid, uneven advance of AI outpacing scientific studies, a concern also noted in Davos discussions and the AI Policy Research Roadmap which stresses the policy-research lag [S31][S33].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
DISAGREED WITH
Josephine Teo
Argument 3
Emphasise high‑level, principle‑based guardrails that survive technical change
EXPLANATION
Bengio argues that policy should focus on high‑level principles that remain applicable despite fast‑changing technical details, rather than trying to codify every specific technology. Such principles can guide the development of guardrails that are robust over time.
EVIDENCE
He suggests thinking about high-level principles that can be applied without delving into details because the details will change, and stresses the need for technology that implements those guardrails in the field [85-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bengio recommends high-level principle guardrails that remain relevant despite technical shifts; this aligns with his Davos statements and the AI policy roadmap that calls for principle-level guidance balanced with practical guardrails [S31][S33].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
DISAGREED WITH
António Guterres
Soumya Swaminathan
4 arguments · 180 words per minute · 451 words · 149 seconds
Argument 1
Policy must adapt to the best available evidence, not wait for certainty
EXPLANATION
Swaminathan asserts that policy cannot wait for absolute certainty; it must be based on the best current evidence and remain flexible to adapt as new data emerges. This mirrors the rapid evidence turnover experienced during the COVID‑19 response.
EVIDENCE
She explains that during COVID-19 they reviewed hundreds of publications daily to make recommendations, and that policy must change, ask for relevant evidence, and adapt when that evidence becomes clear [217-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Swaminathan draws on the COVID-19 experience of daily evidence review to argue for evidence-based, adaptable policy, a point reiterated in the “Why science matters” discussion [S20][S33].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
António Guterres, Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Josephine Teo
Argument 2
Panel functions like an IPCC for AI, linking science to policy worldwide
EXPLANATION
Swaminathan likens the new UN scientific panel to the IPCC, suggesting it should serve as a global mechanism that aggregates scientific findings and connects them to policy decisions across nations, facilitating coordinated responses to AI challenges.
EVIDENCE
She states that the UN body is similar to the IPCC and should establish systems that link to national bodies, ensuring all voices are heard and that policy is informed by science [209-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She likens the new UN scientific panel to the IPCC, a comparison supported by the Building Inclusive Global Digital Governance report and the “Why science matters” brief that stress an IPCC-style global mechanism for AI [S34][S20].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
AGREED WITH
António Guterres, Josephine Teo, Amandeep Singh Gill
DISAGREED WITH
Balaraman Ravindran
Argument 3
Inclusion of diverse, especially low‑income, voices is critical for trustworthy evidence
EXPLANATION
Swaminathan emphasizes that for scientific evidence to be trusted and actionable, it must incorporate perspectives from low‑income populations and diverse stakeholders, ensuring recommendations are relevant globally. She cites past criticism of WHO recommendations that were not suitable for low‑income contexts.
EVIDENCE
She notes that during COVID-19 some recommendations were relevant only to high-income countries, highlighting the need to include voices of women, low-income women, and remote farmers to make AI work for everyone [213-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for diversity in evidence production and inclusion of low-income perspectives are documented in inclusive governance literature and equity-focused studies, highlighting the need for broad stakeholder input [S35][S36][S37][S38].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
AGREED WITH
Balaraman Ravindran, Yoshua Bengio, Josephine Teo
Argument 4
Equity must be central to AI governance, ensuring benefits for all populations
EXPLANATION
Swaminathan stresses that equity should be at the heart of AI governance, guaranteeing that AI benefits are distributed fairly and that marginalized groups are not left behind. She calls for the panel to network scientists across sectors to address safety, risks, and equity.
EVIDENCE
She mentions that the panel could help network scientists sectorally, look at emerging evidence, set priorities, and ensure equity is central to AI as a public good [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Equity and access as core challenges are emphasized in data-first equity analyses and the World Economic Forum panel on equitable technology governance, reinforcing her argument for equity-centered AI policy [S38][S39].
MAJOR DISCUSSION POINT
Capacity building and equitable AI deployment
AGREED WITH
Balaraman Ravindran, Yoshua Bengio, Josephine Teo
Anne Bouverot
2 arguments · 142 words per minute · 501 words · 211 seconds
Argument 1
Understanding reduces fear and enables informed decisions
EXPLANATION
Bouverot argues that fear stems from lack of understanding, and that increasing scientific comprehension of AI reduces anxiety and supports rational policy choices. She cites Marie Curie’s famous quote to illustrate this point.
EVIDENCE
She quotes Marie Curie saying “nothing in life is to be feared, everything is to be understood” and stresses that understanding is the first step before moving to policymakers and citizens [250-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to replace hype and fear with shared scientific understanding is highlighted in the Guterres keynote and the “Why science matters” briefing, supporting Bouverot’s claim that understanding mitigates fear [S5][S20].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
AGREED WITH
António Guterres, Brad Smith, Yoshua Bengio
Argument 2
Multidisciplinary, UN‑backed panel essential for credible advice
EXPLANATION
Bouverot highlights that a multidisciplinary panel anchored in the UN provides credible, globally accepted scientific advice for AI governance. She points to France’s support and the nomination of a scientist to the panel as evidence of its importance.
EVIDENCE
She notes that France is fully supportive of the scientific panel, proud of Joëlle Barral as a nominee, and stresses that multidisciplinary, UN-backed panels are essential for credible advice [260-264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of a multidisciplinary, UN-anchored scientific panel for credible global advice is underscored in the Guterres keynote and the Building Inclusive Global Digital Governance report [S5][S34].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
Amandeep Singh Gill
1 argument · 136 words per minute · 644 words · 283 seconds
Argument 1
The “science‑evidence‑policy” loop drives effective governance
EXPLANATION
Gill describes a feedback loop where scientific evidence informs policy, and policy questions shape further scientific inquiry, creating a dynamic cycle that enhances AI governance. He frames this loop as central to the discussion of the new panel.
EVIDENCE
He states there is a loop between science and evidence, and evidence and policy, and that they want to explore that loop today in the context of the International Independent Scientific Panel [201-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “science-evidence-policy” feedback loop is described in the “Why science matters” discussion as central to the new panel’s work, illustrating how scientific evidence informs policy and vice-versa [S20].
MAJOR DISCUSSION POINT
Science as the foundation for AI governance
Brad Smith
3 arguments · 132 words per minute · 1339 words · 606 seconds
Argument 1
Hype and grandiose predictions hinder realistic governance; focus on current facts
EXPLANATION
Smith criticizes the tendency to make bold, unverified predictions about AI, arguing that such hype distracts from grounded, fact‑based governance. He advocates focusing on present evidence rather than speculative crystal‑ball forecasts.
EVIDENCE
He recounts listening to predictions that scored an average of 25% accuracy, stating there is no crystal ball, and emphasizes using current understanding each year rather than grandiose forecasts [152-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smith’s criticism of hype aligns with Guterres’ call to replace hype with evidence and with observations that past AI predictions lack empirical grounding, as noted in the keynote and a source on the lack of historical evidence for AI tipping points [S5][S30].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
AGREED WITH
António Guterres, Yoshua Bengio, Anne Bouverot
Argument 2
The United Nations remains the indispensable platform for coordinated global action
EXPLANATION
Smith asserts that the UN is essential for global cooperation, preventing fragmentation and providing a framework where nations can collectively address AI challenges. He cites the UN’s historical role in averting nuclear catastrophe and its ongoing relevance.
EVIDENCE
He mentions that the UN has been indispensable for protecting people and preserving our species, referencing its role in managing nuclear weapons and being part of solutions worldwide [112-119][126-129].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smith’s assertion is supported by the Guterres keynote emphasizing the UN’s historic role in global coordination and the UN Security Council AI meeting that highlights the UN’s centrality in AI governance [S5][S27].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
Argument 3
AI should augment human capability rather than replace it; focus on practical benefits
EXPLANATION
Smith argues that AI’s value lies in enhancing human abilities and solving problems, not merely in creating smarter machines. He stresses using AI to make people smarter and to address societal needs.
EVIDENCE
He says the important question is not whether machines will be smarter than humans, but how we will use them to make people smarter and help us do what we need to do [174-177].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
Balaraman Ravindran
2 arguments · 169 words per minute · 483 words · 171 seconds
Argument 1
Lack of local benchmarks hampers policy decisions in education and agriculture
EXPLANATION
Ravindran points out that India lacks domestic evidence and benchmarks to assess AI’s impact on education and agriculture, making it difficult to craft effective policies. He calls for data on AI’s effects on youth, children, and rural versus urban contexts.
EVIDENCE
He describes uncertainty about AI’s impact on the social fabric, children, youth, and agriculture, noting the absence of Indian benchmarks and examples of AI bots for farmers, and mentions preliminary studies on AI in education with unclear causal relationships [225-240].
MAJOR DISCUSSION POINT
Rapid AI advancement and the policy lag
DISAGREED WITH
Soumya Swaminathan
Argument 2
Research on AI’s impact on youth, education, and agriculture is needed to guide equitable policies
EXPLANATION
Building on the previous point, Ravindran stresses the need for systematic research to generate evidence on AI’s effects on youth, learning behavior, and agricultural efficiency, which would inform equitable policy decisions. He highlights the importance of understanding habit‑driven AI adoption in schools.
EVIDENCE
He mentions preliminary studies showing AI adoption in education is linked to habit, but the causal direction is unclear, and calls for more evidence to evaluate AI’s effectiveness in agriculture and education [229-240].
MAJOR DISCUSSION POINT
Capacity building and equitable AI deployment
AGREED WITH
Soumya Swaminathan, Yoshua Bengio, Josephine Teo
Ajay Sood
1 argument · 138 words per minute · 304 words · 131 seconds
Argument 1
Embedding governance through “techno‑legal” design integrates risk management into systems
EXPLANATION
Sood proposes a “techno‑legal” approach where governance mechanisms are built directly into technical designs, allowing AI systems to incorporate risk mitigation at the architectural level. This method mirrors India’s experience with digital public infrastructure.
EVIDENCE
He explains that governance was embedded through technical design in digital public infrastructure, calling it “techno-legal” and suggesting it as a way to handle AI risks [296-298].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
Josephine Teo
5 arguments · 140 words per minute · 901 words · 385 seconds
Argument 1
Singapore’s commitment to the panel and its role in global AI safety
EXPLANATION
Teo affirms Singapore’s support for the Independent International Scientific Panel and its active participation in global AI safety initiatives, positioning Singapore as a proactive small‑state contributor. She highlights hosting events and collaborating on safety testing.
EVIDENCE
She welcomes the establishment of the panel, notes Singapore’s role in the International Scientific Exchange on AI Safety, the Singapore Consensus, and participation in joint testing efforts and red-team challenges, and mentions the ASEAN work on AI safety benchmarks [335-352].
MAJOR DISCUSSION POINT
Creation and purpose of the Independent International Scientific Panel on AI
AGREED WITH
António Guterres, Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan
Argument 2
Need for standardized evaluation methodologies that work across regulatory contexts
EXPLANATION
Teo calls for common evaluation methods that can be applied internationally, enabling consistent assessment of AI systems despite differing regulatory environments. Standardization is essential for interoperability and trust.
EVIDENCE
She states the need for standardized evaluation methodologies that work across different regulatory contexts and mentions capacity building for all countries to engage with technical challenges [340-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for common evaluation methods and interoperable standards mirrors Guterres’ emphasis on shared safety measures and the software.gov description of interoperability through common standards [S5][S29].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
AGREED WITH
António Guterres, Soumya Swaminathan, Amandeep Singh Gill
Argument 3
Investment in AI R&D and safety institutes creates the scientific base for standards
EXPLANATION
Teo emphasizes that sustained investment in AI research and dedicated safety institutes provides the scientific foundation needed to develop robust standards and guidelines. Singapore has allocated a billion‑dollar AI R&D plan and established a digital trust centre.
EVIDENCE
She notes Singapore set aside a billion dollars in a national AI R&D plan for foundational and applied research into responsible AI, and mentions a designated AI safety institute and a centre for advanced technologies in online safety [320-322].
MAJOR DISCUSSION POINT
Operationalising AI principles and building standards
AGREED WITH
António Guterres
Argument 4
Capacity‑building programmes ensure all countries can engage with technical challenges
EXPLANATION
Teo argues that integrating science and policy, and fostering international cooperation, helps build capacity in all nations to understand and regulate AI, preventing fragmentation. She stresses the importance of collaborative approaches for global interoperability.
EVIDENCE
She says both impulses of speed and caution are necessary, and that integration of science and policy, plus international cooperation, can develop sound interoperable approaches, highlighting capacity building as essential [323-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building as a means to enable worldwide participation in AI governance is highlighted in the “Why science matters” briefing and the inclusive governance report that stresses building scientific capacity across nations [S20][S34].
MAJOR DISCUSSION POINT
Capacity building and equitable AI deployment
Argument 5
ASEAN and Singapore initiatives illustrate regional harmonisation efforts
EXPLANATION
Teo highlights ASEAN’s work on AI governance, including guides and red‑team challenges, as examples of regional harmonisation that complement global UN efforts. These initiatives aim to align standards across the region.
EVIDENCE
She describes ASEAN’s AI Governance Guide, efforts to adapt global norms, the Singapore AI Safety Red Teaming Challenge, and work to develop regional AI safety benchmarks, as well as Singapore’s ongoing participation in joint testing and capacity-building activities [327-334][347-353].
MAJOR DISCUSSION POINT
Global cooperation, trust and avoiding fragmented regulations
Agreements
Agreement Points
Science should be central to AI governance, providing a shared evidence‑based foundation for policy.
Speakers: António Guterres, Yoshua Bengio, Anne Bouverot, Amandeep Singh Gill, Soumya Swaminathan, Josephine Teo
Science‑centered architecture for AI governance · Neutral, fact‑based synthesis to inform policy · Understanding reduces fear and enables informed decisions · The ‘science‑evidence‑policy’ loop drives effective governance · Policy must adapt to the best available evidence, not wait for certainty · Singapore’s commitment to the panel and its role in global AI safety
All speakers stress that AI governance must be built on scientific knowledge and evidence, with panels and loops that translate neutral, fact-based synthesis into policy, and that understanding reduces fear and builds trust [17-20][24-30][68-74][250-259][201-202][217-220][335-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on science aligns with calls for evidence-based AI governance highlighted in recent policy briefs, which argue that scientific assessment enables early risk anticipation and informs international cooperation [S52] and is reflected in the AI Policy Research Roadmap advocating systematic evidence gathering [S55].
Establishing common technical baselines and shared testing standards is essential for interoperability and coordinated global action.
Speakers: António Guterres, Josephine Teo, Soumya Swaminathan, Amandeep Singh Gill
Common technical baselines and shared testing standards enable interoperability · Need for standardized evaluation methodologies that work across regulatory contexts · Panel functions like an IPCC for AI, linking science to policy worldwide · The ‘science‑evidence‑policy’ loop drives effective governance
Speakers agree that shared benchmarks, standardized evaluation methods and an IPCC-style panel create interoperable frameworks that allow AI systems to scale safely across borders [41-44][340-342][209-212][201-202].
POLICY CONTEXT (KNOWLEDGE BASE)
Technical baselines and testing standards are core to interoperability frameworks such as the Software.gov guidance on common standards [S50] and are echoed in IGF discussions on global AI standards that stress shared testing protocols for coordinated action [S51][S65].
Inclusion of diverse, especially low‑income and regional, voices is critical to produce trustworthy evidence and ensure equitable AI outcomes.
Speakers: Soumya Swaminathan, Balaraman Ravindran, Yoshua Bengio, Josephine Teo
Inclusion of diverse, especially low‑income, voices is critical for trustworthy evidence · Research on AI’s impact on youth, education, and agriculture is needed to guide equitable policies · AI will affect developing countries … need multidisciplinary array so everyone at the table · Equity must be central to AI governance, ensuring benefits for all populations
All four speakers highlight the need to incorporate perspectives from low-income groups, regional contexts and developing countries to build credible evidence and equitable policies [213-218][225-240][92-94][337-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive governance is a pillar of UNCTAD’s AI equity agenda and of multistakeholder standard-setting processes that stress participation from developing countries and under-represented groups [S62][S63][S64].
Reduce hype and focus on concrete, evidence‑based facts to guide realistic AI governance.
Speakers: António Guterres, Brad Smith, Yoshua Bengio, Anne Bouverot
Less noise, more knowledge · Hype and grandiose predictions hinder realistic governance; focus on current facts · Policy cannot be built on guesswork · Understanding reduces fear and enables informed decisions
The speakers concur that AI policy should be grounded in solid evidence rather than speculative predictions, emphasizing factual knowledge and public understanding [15-17][52-53][152-166][68-74][250-259].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls to curb hype and prioritize factual evidence appear in analyses of AI’s rapid development versus policy lag, urging evidence-based framing to avoid speculative regulation [S52][S54].
Sustained investment in AI research, safety institutes and capacity‑building programmes underpins robust standards and global cooperation.
Speakers: Josephine Teo, António Guterres
Investment in AI R&D and safety institutes creates the scientific base for standards · Science‑led governance … accelerator for solutions
Both speakers underline that dedicated funding for AI research and safety, together with capacity-building, provides the scientific foundation needed for effective standards and international collaboration [320-322][24-30].
POLICY CONTEXT (KNOWLEDGE BASE)
Investment in research and safety institutes is highlighted in the AI Policy Research Roadmap as essential for building capacity and trustworthy standards, and collaborative safety monitoring initiatives stress the need for sustained funding [S55][S66].
Similar Viewpoints
Both emphasize the United Nations as the essential, irreplaceable platform for coordinating global AI governance and fostering scientific cooperation [17-20][112-119][126-129].
Speakers: António Guterres, Brad Smith
Science‑centered architecture for AI governance · The United Nations remains the indispensable platform for coordinated global action
Both note that rapid AI advances outpace research and prediction, leading to a lag that makes hype‑driven forecasts unreliable and underscores the need for evidence‑based approaches [81-84][152-166].
Speakers: Yoshua Bengio, Brad Smith
AI capabilities are growing unevenly and faster than scientific publishing, creating a lag · Hype and grandiose predictions hinder realistic governance; focus on current facts
Unexpected Consensus
Regional harmonisation and benchmark development for AI applications
Speakers: Balaraman Ravindran, Josephine Teo
Research on AI’s impact on youth, education, and agriculture is needed to guide equitable policies · ASEAN and Singapore initiatives illustrate regional harmonisation efforts
Despite representing different regions (India and Singapore), both speakers stress the need for locally-generated evidence and regional coordination (e.g., benchmarks in education/agriculture and ASEAN harmonisation) to inform policy, an alignment not obvious from their distinct national contexts [225-240][327-334].
POLICY CONTEXT (KNOWLEDGE BASE)
Regional harmonisation is identified as a key step between global and national policies, with IGF and digital cooperation forums recommending benchmark development at the regional level to enable seamless integration [S58][S60][S65].
Overall Assessment

The discussion shows strong consensus that science must be at the heart of AI governance, that common technical standards and shared baselines are vital for interoperability, that inclusive and equitable evidence‑generation is essential, and that hype should be replaced by factual, evidence‑based policy. There is also agreement on the need for sustained investment and capacity building to support these goals.

High – The convergence across UN leadership, academia, industry and regional representatives indicates a solid foundation for coordinated, science‑driven AI governance, increasing the likelihood of effective global policy frameworks.

Differences
Different Viewpoints
Approach to policy design – high‑level principle‑based guardrails versus detailed technical baselines and shared testing standards
Speakers: Yoshua Bengio, António Guterres
Emphasise high‑level, principle‑based guardrails that survive technical change · Common technical baselines and shared testing standards enable interoperability
Bengio argues that policy should focus on high-level principles that remain applicable despite rapid technical change, rather than trying to codify every detail [85-86]. Guterres, by contrast, stresses the need for common technical benchmarks for testing AI systems to create interoperability and allow technologies to scale globally [41-44]. Both seek effective AI governance but propose different routes.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between principle-based guardrails and detailed technical standards is discussed in policy briefs that argue high-level commitments must be paired with concrete standards to be operationally effective [S56][S65].
Handling the speed of AI advancement – accepting a lag between research and policy versus trying to balance speed with caution through integrated science‑policy processes
Speakers: Yoshua Bengio, Josephine Teo
AI capabilities are growing unevenly and faster than scientific publishing, creating a lag · Need to balance rapid AI development with careful, evidence‑based policy through integration of science and policy
Bengio points out that AI advances faster than scientific studies can keep up, creating a lag that hampers timely regulation [81-84]. Teo acknowledges a similar tension, noting that moving quickly and moving carefully are both necessary and must be balanced via science-policy integration [323-326]. While they share the concern, Bengio emphasizes the inevitability of lag, whereas Teo stresses a proactive balancing act.
POLICY CONTEXT (KNOWLEDGE BASE)
The pacing problem between fast-moving AI research and slower policy processes has been highlighted in recent sessions, noting the need for integrated science-policy mechanisms to reduce the lag [S47][S54].
Source of evidence for policy – reliance on a global IPCC‑style scientific panel versus the need for locally generated benchmarks and data
Speakers: Soumya Swaminathan, Balaraman Ravindran
Panel functions like an IPCC for AI, linking science to policy worldwide · Lack of local benchmarks hampers policy decisions in education and agriculture
Swaminathan likens the new UN scientific panel to the IPCC, arguing it should aggregate global scientific findings to inform policy [209-212]. Ravindran stresses that India lacks domestic evidence and benchmarks to assess AI’s impact on education and agriculture, making policy formulation difficult [225-240]. The disagreement lies in the emphasis on global versus local evidence generation.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on global versus local evidence sources reference proposals for an IPCC-style AI panel alongside calls for region-specific benchmarks to capture contextual nuances, as seen in discussions on inclusive evidence generation [S52][S58].
Unexpected Differences
None identified
The discussion was largely collaborative, with speakers building on each other’s points rather than presenting starkly opposing views. No surprise conflicts emerged beyond the nuanced differences noted above.
Overall Assessment

The speakers largely converged on the need for science‑based, evidence‑driven AI governance, the importance of international cooperation, and the role of the UN‑backed scientific panel. The main points of contention concerned the preferred mechanism for translating science into policy – high‑level principles versus detailed technical standards – and the balance between global versus local evidence generation.

Low to moderate. While there are nuanced differences in implementation strategies, there is broad consensus on overarching goals. This suggests that future work on the Independent International Scientific Panel can progress with relatively smooth coordination, though careful attention will be needed to reconcile principle‑based approaches with concrete technical standards and to integrate both global and local evidence streams.

Partial Agreements
Both agree that AI governance must be grounded in scientific evidence. Guterres calls for a science‑led architecture that provides a shared baseline for all nations [17-20][24-30], while Swaminathan stresses that policy should be based on the best current evidence and remain adaptable [217-220]. They differ in emphasis: Guterres focuses on building a global scientific infrastructure, whereas Swaminathan highlights the need for rapid evidence turnover and flexibility.
Speakers: António Guterres, Soumya Swaminathan
Science‑centered architecture for AI governance · Policy must adapt to the best available evidence, not wait for certainty
Both advocate for standardized, interoperable technical frameworks. Guterres argues that agreeing on how to test systems creates interoperability [41-44], while Teo calls for standardized evaluation methods that function across different regulatory regimes [340-342]. Their goals align, but Guterres emphasizes global benchmarks, whereas Teo stresses methodological standardisation coupled with capacity‑building.
Speakers: António Guterres, Josephine Teo
Common technical baselines and shared testing standards enable interoperability · Need for standardized evaluation methodologies that work across regulatory contexts
Takeaways
Key takeaways
Science must be the foundation of AI governance; neutral, fact‑based synthesis is needed to inform policy.
The United Nations is establishing an Independent International Scientific Panel on AI to provide a shared, multidisciplinary baseline for all nations, especially giving a voice to the Global South.
AI capabilities are advancing faster than scientific publishing and policy processes, creating a lag that must be addressed with high‑level, principle‑based guardrails.
Fragmented national regulations risk higher costs and reduced safety; common technical standards and interoperable testing frameworks are essential.
Equitable and inclusive evidence—incorporating voices from low‑income countries, women, youth, and diverse sectors—is critical for trustworthy governance.
Operationalising AI principles requires standardized evaluation methods, capacity‑building, and embedding governance into technology (“techno‑legal” design).
Industry (e.g., Microsoft) and small states (e.g., Singapore) are committing resources to AI R&D, safety institutes, and regional harmonisation efforts.
Resolutions and action items
The UN panel will fast‑track a first report ahead of the Global AI Governance Summit in July.
Member states are invited to adopt the panel’s shared baseline of analysis for technical coordination and risk‑based guardrails.
Singapore will host the second edition of the International Scientific Exchange on AI Safety (May 17‑18) and continue its AI safety red‑team challenges and ASEAN harmonisation work.
Microsoft pledged to devote energy and resources to support the UN‑led scientific panel and related governance initiatives.
India’s National AI Governance Framework will pursue public‑private partnerships to build compute capacity and embed techno‑legal safeguards.
Panel members (including Yoshua Bengio, Balaraman Ravindran, Anne Bouverot, etc.) will work to develop multidisciplinary evidence streams for health, education, agriculture, and youth impacts.
Unresolved issues
How to create rapid, reliable scientific benchmarks that keep pace with fast‑moving AI capabilities.
Specific methodologies for measuring AI impact on education, agriculture, and youth in diverse contexts, especially in the Global South.
Concrete mechanisms for translating high‑level AI principles into standardized, cross‑jurisdictional evaluation protocols.
Ways to ensure continuous inclusion of under‑represented voices (e.g., low‑income women, remote farmers) in the evidence‑generation process.
The extent and timing of policy interventions when scientific certainty is low but potential risks are high (e.g., tipping‑point analogies).
Funding models and resource allocation for sustained global scientific collaboration beyond initial UN panel activities.
Suggested compromises
Adopt high‑level, principle‑based guardrails that remain applicable despite technical change, rather than detailed prescriptive rules.
Balance rapid AI development with careful, evidence‑driven policy by integrating science continuously into the policy cycle.
Combine “techno‑legal” design (embedding governance into system architecture) with flexible regulatory frameworks to allow adaptation.
Use the UN’s legitimacy to create interoperable standards while allowing national contexts to tailor implementation.
Treat scientific input as a foundation for durable governance rather than a constraint on policy flexibility, enabling iterative refinement.
Thought Provoking Comments
Science is a universal language. When we agree on how to test systems and measure risk, we create interoperability, allowing a startup in New Delhi to scale globally with confidence because the benchmarks are shared.
Highlights the foundational role of shared scientific standards in overcoming fragmentation and building trust across borders, framing science as the bridge between diverse policy regimes.
Set the agenda for the whole session, prompting subsequent speakers to discuss how to create common baselines, and influencing the panelists to stress the need for neutral, globally accepted metrics.
Speaker: António Guterres
The situation is similar to climate tipping points: we lack past evidence to be sure a particular tipping point will happen, yet the potential severity is catastrophic. We must recognize uncertainty, identify where evidence is strong, and act on high‑severity risks even without proof.
Provides a powerful analogy that clarifies why precautionary governance is needed despite scientific uncertainty, linking AI risk assessment to well‑understood climate policy frameworks.
Shifted the conversation from abstract optimism to concrete risk‑management, leading the panel to explore how to identify and prioritize uncertain but high‑impact AI risks.
Speaker: Yoshua Bengio
There is a well‑known economic theory that humanity repeats its great economic mistakes every 80 years because each generation forgets the previous crises. The United Nations, created just over 80 years ago, is one of humanity’s greatest successes and must be reinvested in.
Frames the UN’s relevance historically, using a cyclical view of economic memory to argue for institutional continuity in the face of rapid technological change.
Re‑centered the dialogue on the strategic importance of multilateral institutions, prompting other speakers (e.g., Josephine Teo) to emphasize UN legitimacy and inclusiveness.
Speaker: Brad Smith
I used Microsoft Copilot to grade AI predictions from industry leaders; the average accuracy was 25%. There is no crystal ball. We have the ability to understand where we are today, not where we will be a decade from now.
Critiques the culture of hype and over‑promising in AI, grounding the discussion in empirical performance and urging humility.
Triggered a tone shift toward skepticism of grandiose forecasts, encouraging panelists like Bengio and Swaminathan to stress evidence‑based policy rather than speculative visions.
Speaker: Brad Smith
People disagree because they don’t have a common understanding of the problem. We rush to debate solutions without first agreeing on the problem’s context.
Identifies a fundamental communication breakdown that hampers effective governance, suggesting a procedural remedy—shared problem definition.
Guided the moderator to frame the rapid‑fire round around “loops” between science and policy, and inspired panelists to discuss how to build shared contextual understanding.
Speaker: Brad Smith
During COVID we reviewed hundreds of papers daily to make rapid recommendations. AI is similar; we need a global body like the IPCC to provide fast, trustworthy evidence that can be adapted to different country contexts.
Draws a direct parallel between pandemic response and AI governance, illustrating how rapid evidence synthesis can inform timely policy while acknowledging contextual differences.
Prompted the panel to consider mechanisms for fast evidence aggregation and highlighted the need for inclusivity of low‑income perspectives, influencing later remarks on equity.
Speaker: Soumya Swaminathan
If economists predict 80% of jobs will be transformed, policy should focus on training and reskilling; if they predict half the jobs will disappear, policy should consider universal basic income. The underlying scientific forecast determines the policy response.
Shows how divergent scientific predictions lead to vastly different policy pathways, underscoring the importance of accurate forecasting for social policy design.
Added nuance to the discussion on AI’s labor impact, prompting participants to think about scenario‑based policy planning rather than one‑size‑fits‑all solutions.
Speaker: Anne Bouverot
We embed governance through technical design – a ‘techno‑legal’ approach – as we did with India’s digital public infrastructure for identity and finance. This can be a model for AI safety.
Introduces a concrete, implementation‑focused strategy that blends law and technology, moving the conversation from abstract principles to actionable design patterns.
Shifted the dialogue toward practical engineering solutions, influencing later remarks about standardised evaluation methodologies and benchmarking.
Speaker: Ajay Sood
Balancing the impulse to move quickly with the need to move carefully is not impossible; it requires integration of science and policy, and international cooperation to develop interoperable approaches.
Synthesises the central tension of the whole session—speed versus safety—and positions the UN as the facilitator of interoperable, science‑driven governance.
Served as a concluding synthesis that reinforced earlier points about shared baselines, inclusivity, and the UN’s unique legitimacy, tying together the diverse strands of the discussion.
Speaker: Josephine Teo
AI has strong potential for helping science, as seen with recent Nobel‑winning work in physics and chemistry, but this requires open, globally funded databases of scientific data.
Extends the conversation beyond governance to the positive feedback loop where AI accelerates scientific discovery, emphasizing infrastructure needs for that synergy.
Opened a brief but significant side‑track on the benefits of AI for scientific research, reinforcing the panel’s earlier call for multidisciplinary collaboration.
Speaker: Anne Bouverot
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that repeatedly returned to the need for shared scientific baselines, humility in the face of uncertainty, and concrete mechanisms for translating evidence into policy. Guterres’ framing of science as a universal language set the stage, while Bengio’s climate‑tipping‑point analogy and Brad Smith’s critique of hype sharpened the focus on precautionary, evidence‑based governance. Contributions from Swaminathan and Bouverot linked these ideas to real‑world crises and labor policy, respectively, and Sood’s ‘techno‑legal’ proposal offered a tangible design pathway. Josephine Teo’s closing synthesis tied the threads together, reaffirming the UN’s role as the integrator of speed, safety, and inclusivity. Collectively, these comments redirected the conversation from lofty aspirations to actionable, interdisciplinary strategies, shaping a nuanced, forward‑looking consensus on how science can effectively inform global AI governance.

Follow-up Questions
What is the evidence on how AI affects children, youth, and social fabric in India, including issues like isolation and mental health?
Ravindran highlighted a lack of data on AI’s societal impacts in the Global South, especially on vulnerable groups, indicating a need for targeted research.
Speaker: Balaraman Ravindran
What benchmarks and evaluation methods can assess the efficiency and effectiveness of AI applications in Indian agriculture, such as AI co-pilots for farmers?
He asked for concrete evidence and metrics to gauge AI’s contribution to agricultural productivity, revealing a gap in measurable standards.
Speaker: Balaraman Ravindran
What is the causal relationship between AI usage and student learning outcomes in education—does AI use improve learning, or do better learners use AI more?
Ravindran noted uncertainty about directionality, calling for rigorous studies to untangle cause and effect.
Speaker: Balaraman Ravindran
How can globally accessible, publicly funded scientific data repositories be created to enable AI‑driven scientific discovery?
She emphasized the need for worldwide databases built by scientists and supported by governments to unlock AI’s potential in research.
Speaker: Anne Bouverot
What standardized, interoperable AI safety evaluation methodologies can be developed to work across different regulatory contexts?
She identified the lack of common evaluation tools as a barrier to operationalizing high‑level AI principles globally.
Speaker: Josephine Teo
What capacity‑building programs are needed so all countries, especially low‑resource ones, can meaningfully engage with technical AI challenges?
She pointed out disparities in technical expertise and the necessity of support mechanisms for inclusive participation.
Speaker: Josephine Teo
How can the voices of women, low‑income populations, and remote farmers be systematically incorporated into AI policy evidence and recommendations?
Drawing on WHO experience, she stressed the importance of inclusive evidence that reflects diverse contexts.
Speaker: Soumya Swaminathan
What processes and feedback loops are required to translate complex scientific AI findings into language and formats usable by policymakers?
He highlighted the communication gap between scientists and decision‑makers and the need for iterative, interdisciplinary interfaces.
Speaker: Yoshua Bengio
What high‑level, technology‑agnostic principles can guide AI governance to remain effective despite rapid technical change?
He suggested focusing on broad principles rather than detailed rules to keep pace with AI’s fast evolution.
Speaker: Yoshua Bengio
How can forecasting of AI developments be improved and made accountable, given the poor track record of past predictions?
He criticized inaccurate future forecasts and implied the need for better predictive methodologies and accountability mechanisms.
Speaker: Brad Smith
What concrete evidence links AI interventions to progress on Sustainable Development Goals, and what examples can illustrate this link?
When asked for SDG impact examples, he lacked ready data, indicating a research gap in measuring AI’s contribution to SDGs.
Speaker: Balaraman Ravindran
How can a shared global baseline for AI testing and risk measurement be established to ensure interoperability and avoid fragmented regulations?
He advocated for common technical standards to enable consistent safety and trust across jurisdictions.
Speaker: António Guterres
What multidisciplinary research is needed to anticipate AI’s impacts on developing countries and ensure equitable outcomes?
He expressed concern about the Global South and called for cross‑disciplinary studies to forecast and mitigate risks.
Speaker: Yoshua Bengio

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Smaller Footprint Bigger Impact Building Sustainable AI for the Future

Session at a glance: Summary, keypoints, and speakers overview

Summary

The event opened with an introduction and a keynote by France’s Minister Delegate for AI and Digitalisation, Anne Le Henanf, who framed sustainable AI as an urgent global priority [1][5-10]. She described the Sustainable AI Coalition’s rapid growth to a network reaching over 220 million people, its three-pillar strategy of research, measurement and action, and announced the Resilient AI Challenge as a concrete step toward energy-efficient models [16-23][24-27][28-30][31-33].


Dr. Tafik Delassie emphasized that the energy and resource footprint of large generative models threatens low-income regions, argued that the next breakthrough must be leaner, resilient systems, and officially launched the Resilient AI Challenge to move from principle to practice [40-46][51-55][60-62][69-74]. Moderator Anne Bouvreau then invited panelists, and Ambassador Philip Tigo explained Kenya’s 95 % renewable energy mix, the need for green-by-design AI use, and highlighted the role of international standards in governing AI’s environmental impact [104-112][119-120].


James Manyika outlined Google’s Gemini family, which uses mixture-of-experts architectures and aims for carbon-free data centres by 2035, illustrating how performance and efficiency can be pursued together [131-144][150-158]. Arthur Mensch added that sparse-expert models, open-source releases, localisation of training to low-carbon grids, and diverse low-power chips dramatically reduce AI’s carbon intensity, and he called for public-procurement policies to accelerate these gains [167-176][182-190][192-197][252-262].


Abhishek Singh described India’s focus on inference efficiency, off-grid and modular reactor solutions, and policy measures that open small-model development to the private sector, arguing that sustainable AI is essential for scaling public-sector services cost-effectively [216-224][226-236][238-242][310-319]. The panel agreed that AI can support grid management, agriculture and material science, turning the technology into a climate-mitigation tool [202-208], and that governments can further progress by incentivising open-source research, setting procurement criteria, and investing in renewable off-grid power for AI compute [252-262][267-270].


Both speakers and panelists highlighted that model compression and task-specific architectures can cut AI’s energy use by up to 90 % without harming performance [65-66]. They concluded that coordinated international action, standards, and initiatives such as the Resilient AI Challenge are essential to embed resilience, fairness and sustainability into the future of AI [31-33][328-334].


Keypoints


Major discussion points


Sustainable and resilient AI as an urgent global imperative – The speakers framed AI’s future around energy efficiency, environmental limits, and fairness, warning that AI’s energy needs are outpacing green-energy progress and that large models risk widening global divides [7-15].


Coordinated actions and standards to drive “green” AI – France’s Sustainable AI Coalition is scaling up research, publishing a second-generation global standard for AI environmental sustainability, and launching the Resilient AI Challenge to move from principle to practice [24-27][69-74].


Industry-led technical approaches to reduce AI’s carbon footprint – Google’s James Manyika described the Gemini family, mixture-of-experts architectures, and a 24/7 carbon-free compute goal, while Mistral’s Arthur Mensch highlighted sparse-expert models, caching, open-source model release, locality-based data-center choices, and energy-efficient chips as key levers [133-158][168-194].


Policy levers and government involvement – Kenya’s ambassador emphasized a “green-by-design” energy mix, education on responsible AI use, and participation in international standards [107-119]; other speakers called for public-procurement criteria, incentives for off-grid renewable power, and clear environmental standards to guide AI deployment [252-262][267-272].


Collaboration across sectors as the path forward – Throughout the session, participants from UNESCO, France, India, Kenya, and leading AI firms stressed that multi-stakeholder cooperation, open-source sharing, and joint research are essential to achieve inclusive, low-impact AI [45-48][70-74][285-291].


Overall purpose / goal of the discussion


The session was convened to mobilise governments, international organisations, and the AI industry around the development and deployment of sustainable, resilient AI that can meet climate-related targets while remaining inclusive. It aimed to showcase concrete initiatives (standardisation, research funding, the Resilient AI Challenge) and to solicit concrete commitments from policymakers and companies to embed energy-efficiency and fairness into AI practice.


Overall tone


The conversation began with a formal, diplomatic tone (opening remarks by the French minister) that quickly shifted to a technical and solution-focused dialogue as industry leaders detailed model-level innovations. Mid-session the tone became collaborative and optimistic, highlighting shared commitments and concrete actions. The closing returned to a hopeful and rallying tone, urging participants to join the challenge and reinforcing that environmental stewardship is now a competitive advantage for AI stakeholders.


Speakers

Dr. Tafik Delassie – Area of expertise: UNESCO communications, technology sector, AI policy and sustainability; Role/Title: Assistant Director General for Communication and Technology Sector, UNESCO [S1].


Anne Le Henanf – Area of expertise: AI policy, digitalisation, sustainable AI; Role/Title: Minister Delegate for AI and Digitalisation Affairs, France.


Ambassador Philip Tigo – Area of expertise: Technology policy, AI for development in Africa; Role/Title: Ambassador and Special Technology Envoy for Kenya [S7].


James Manyika – Area of expertise: AI research, large-scale models, sustainability, cloud infrastructure; Role/Title: Senior Vice President, Google / Alphabet Inc. [S10].


Arthur Mensch – Area of expertise: AI model development, efficient architectures, open-source AI; Role/Title: Co-founder and Chief Executive Officer, Mistral AI [S13].


Anne Bouvreau – Area of expertise: AI policy and diplomacy for France; Role/Title: Special Envoy on AI for France, panel moderator.


Speaker 1 – Area of expertise: Event facilitation/moderation; Role/Title: Host/Moderator of the session (no specific title provided).


Abhishek Singh – Area of expertise: AI policy, government AI strategy, AI for public sector services; Role/Title: Lead organizer of the summit; Under-Secretary, Ministry of Electronics and Information Technology, Government of India [S22].


Additional speakers:


Hélène – Area of expertise: Not specified; Role/Title: Likely co-host/moderator (mentioned briefly in the panel introduction, no formal title provided).


Full session report: Comprehensive analysis and detailed insights

The host opened the session with a brief welcome and outlined the agenda before introducing the first distinguished speaker, Mrs Anne Le Henanf, France’s Minister Delegate for AI and Digitalisation Affairs [1-4]. In her keynote, Le Henanf reframed the debate from “how can AI work for us” to “how can we ensure AI works efficiently, responsibly and fairly for people and for our planet” [7-9]. She warned that AI’s energy demands already outpace the growth of green-energy capacity [10-13] and that massive, unsustainable models risk creating a new fairness crisis by excluding regions with limited resources [14-16].


Le Henanf then presented the Sustainable AI Coalition, noting its growth from 90 founding members to a network that reaches over 220 million people and now includes fifteen countries, eight international organisations, and a broad mix of tech firms, utilities, NGOs and research institutions [18-20][21-22]. The coalition follows a three-pillar approach-research (2026 AI-research pitch sessions) [18-20], measurement (second-generation global standard for AI environmental sustainability) [23-25], and action (low-carbon, renewable-powered data centres and the Resilient AI Challenge) [26-28][31-33]. The coalition is embedded in the UN Global Digital Compact and a UN Environment Assembly resolution [31-33].


After the keynote, the host thanked Le Henanf and introduced Dr Tafik Delassie, Assistant Director-General for Communication and Technology Sector, UNESCO [1-4]. Delassie quantified the scale of the problem: generative-AI inference already consumes hundreds of gigawatt-hours per year-comparable to the annual electricity use of millions of people in low-income countries-and training a single frontier model can require more than 1,000 MWh, enough to power Indian villages for a year [52-55][56-58]. He argued that the next breakthrough will come from “leaner, more resilient systems” that can operate under strict energy constraints [59-60][61-62]. To move from principle to practice, he announced the Resilient AI Challenge, which will benchmark open-source models on accuracy and energy efficiency, with results to be presented at the AI for Good Summit in July in Geneva [65-66][69-74].


The host then transitioned to the panel, introducing Anne Bouvreau as moderator [1-4]. The first panellist, Ambassador Philip Tigo, Tech Envoy for Kenya, explained that Kenya enjoys a 95 % renewable-energy mix-geothermal, wind, hydro and solar-providing a “green-by-design” foundation for AI workloads [107-110]. He highlighted Kenya’s contribution to the first AI environmental-sustainability resolution [111-115] and called for a broader AI-safety research agenda that explicitly includes environmental concerns [280-284]. He also noted the importance of user behaviour and participation in international standards work [119-120].


Mr James Manyika, Senior Vice-President, Google Alphabet, described Google’s Gemini family as an illustration of industry-led technical progress. The Gemini portfolio spans from high-performance “Pro” models to ultra-efficient “Flash” variants, all built on mixture-of-experts architectures that activate only a fraction of parameters, thereby reducing FLOPs per token [133-144][145-148]. Manyika outlined Google’s commitment to carbon-free compute, with investments in nuclear, geothermal, hydro, wind and solar that aim for 24/7 carbon-free operation by 2035 [151-158]. He stressed that efficiency is both an environmental and a business imperative: lower per-token energy use directly improves return on investment at scale [151-153]. He also mentioned the potential of fusion energy, noting AI’s role in plasma containment research [267-270].


Mr Arthur Mensch, CEO, Mistral AI, complemented Google’s approach by detailing additional levers. Mistral employs sparse-expert models that activate only about 5 % of parameters, coupled with sophisticated caching systems that avoid redundant computation, achieving substantial reductions in energy per token [169-171][172-176]. He emphasized that open-sourcing large pretrained models amortises the carbon cost of training across the community, preventing ten separate labs from duplicating the same high-energy work [172-178]. Mensch highlighted localisation strategies-training in low-carbon regions such as nuclear-heavy France or hydro-rich Sweden-and the use of diverse, low-power chips to further cut emissions [182-190][191-194]. He advocated for public-procurement criteria that embed sustainability metrics, arguing that market pressure combined with policy can accelerate efficiency gains [190-194].


Representing India, Mr Abhishek Singh, Lead Organizer, AI Impact Summit, outlined a national strategy focused on inference efficiency and grid optimisation. He noted that AI-driven projects with the Ministry of Power have already reduced transmission and distribution losses by 10-15 % [236-237]. Singh stressed that India will not chase trillion-parameter models; instead, the emphasis is on sector-specific, small-language models that keep per-query costs low, a necessity for public-sector services funded by taxpayers [221-224][226-236]. To meet the massive projected inference demand, India is exploring off-grid renewable solutions [267-270] and small modular reactors to avoid overloading the national grid [314-316].


Across the discussion, the speakers agreed that AI’s growing energy consumption threatens climate goals and widens the digital divide, and that improving efficiency-through greener energy mixes, mixture-of-experts architectures, open-source sharing and localisation-is essential for both equity and business viability [7-9][52-55][151-153][107-115]. They also concurred that robust measurement and standardisation are prerequisites for progress; Le Henanf announced a second-generation global standard [24-25], Mensch called for third-party carbon-intensity audits [193-194], and Manyika urged governments to support off-grid renewable power and detailed footprint assessments [151-158]. Finally, the panel highlighted AI as a climate-mitigation tool, citing high-leverage applications in grid management, agriculture, material science and chemistry [203-208].


The discussion revealed nuanced disagreements. On model size, Le Henanf warned that massive models exacerbate inequality [14-16], while Delassie argued that future breakthroughs must come from leaner systems [59-60]; Manyika, however, defended continued investment in large models within the Gemini family, relying on efficiency tricks rather than abandoning scale [133-148]. Regarding energy strategy, Tigo cautioned that off-grid solutions may be unrealistic for many emerging economies [107-112], whereas Manyika and Singh advocated dedicated off-grid solar, wind, geothermal and even small modular reactors to relieve pressure on national grids [267-270][314-316]. On policy levers, Mensch promoted public-procurement mandates [190-194], while Manyika emphasized broader incentives and standards, suggesting a more flexible approach [151-158].


Key outcomes


* The Resilient AI Challenge is now open for submissions until 15 March; winners will be announced at the AI for Good Summit in July in Geneva [69-74].


* The coalition’s Version 2 standard for AI environmental sustainability has been published jointly by ITU, IEEE and ESO [24-25].


* France pledged to implement low-carbon AI policies, green data centres and the three-pillar research-measurement-action framework [26-27][31-33].


* India committed to continue inference-efficiency projects, including grid-loss reduction pilots and policies that open AI infrastructure to private investment [236-237][267-270][314-316].


* Kenya reaffirmed its 95 % renewable-energy mix, user-education programmes and active participation in international standards work [107-115][119-120].


In her closing remarks, Bouvreau reiterated that environmental impact is now a core competitive factor for AI providers and a prerequisite for equitable development [323-326]. She reminded the audience of the registration deadline for the Resilient AI Challenge [329-331] and thanked the panel for demonstrating that sustainable, resilient AI can become the global baseline for future innovation [69-74]. The event positioned sustainable AI as an urgent, collaborative agenda that bridges policy, industry and research to align technological progress with planetary boundaries.


Session transcript: Complete transcript of the session
Speaker 1

And this is what we will explore at this event. To introduce the topic, we will first have two distinguished speakers. First, I have the honor to welcome Mrs. Anne Le Henanf, France Minister Delegate for AI and Digitalization Affairs. Welcome, Madam Minister.

Anne Le Henanf

Excellencies, distinguished guests, ladies and gentlemen, it’s an honor to address you at Smaller Footprints, Bigger Impact, co-organized by France, UNESCO, and the Sustainable AI Coalition. This event is a continuation of the work co-chaired by India and France in preparation of this AI Impact Summit, putting resiliency, sustainability and efficiency at the heart of the global agenda. The question we face is no longer how can AI work for us, but how can we ensure AI works efficiently, responsibly and fairly for people and for our planet. Resilient and sustainable AI is the key to unlocking digital transformation, environmental protection and inclusive development. Sustainable AI is not an option, it’s an imperative. First, it’s an energy and environment imperative as governments decarbonize.

AI’s energy demands threaten to outpace green energy progress. Model providers face a stark reality: AI’s energy needs are growing faster than supply. Second, it’s a fairness crisis. Massive AI models without sustainability create new divides and can exclude regions and communities lacking resources. That is why France, at the AI Action Summit, made sustainable AI a priority through the Sustainable AI Coalition, launched with UNEP, ITU and India as founding members. Our goal? Leverage AI to solve environmental challenges without exceeding planetary boundaries. From 90 initial partners, we have grown to a network reaching over 220 million people. We are the first coalition of its kind, including tech firms, startups, utilities, NGOs, and research institutions, backed by eight international organizations and 15 countries, with the Netherlands joining this year.

Sustainable AI is now a global priority, embedded in the UN Global Digital Compact and a UN Environment Assembly resolution. To turn vision into action, we focus on three pillars. First, research: in 2026, the coalition will launch AI research pitch sessions to connect university projects with funding and industry partners. Second, measurement: you can’t improve what you can’t measure. Today, I’m proud to announce, on behalf of the coalition, ITU, the Institute of Electrical and Electronics Engineers and ESO, that we published the second version of the global approach on standardization for AI environmental sustainability, to promote consistency in AI environmental sustainability standardization. And third, action: France is implementing policies for low-carbon, efficient AI, powered by renewable energy, hosted in green data centers and designed to be leaner and smarter. This approach boosts competitiveness and discovery with minimal environmental costs. That’s why, as an AI Impact Summit outcome, India, France and UNESCO launched the Resilient AI Challenge, a global challenge to advance compressed, more energy-efficient AI models.

This initiative supports innovation aligned with our shared goals. Sustainable and resilient AI must be the global baseline, the only path to equitable development that serves people and the planet. France and India have led this effort from Paris to New Delhi by focusing on people, planet and progress. Now we must deliver together. I look forward to our panelists’ insights and now invite the next speaker to continue. Thank you.

Speaker 1

Thank you. Many thanks, Madam Minister, for this insightful introduction and the pioneering role of France in sustainable AI. I have now the pleasure to welcome Dr. Tafik Delassie, Assistant Director General for Communication and Technology Sector at UNESCO, whose landmark report on smaller models was published in July last year. Thank you.

Dr. Tafik Delassie

Madam Minister for AI and Digital Affairs, Madam Special Envoy for AI, distinguished participants, esteemed colleagues, dear partners and ladies and gentlemen. I’m very pleased, on behalf of UNESCO, to be with you this afternoon for this important session. But allow me first to raise a question. What if the next breakthrough in AI is not about building ever larger models, but about building leaner, more resilient systems, systems that can solve real-world problems under real-world constraints, including in low-resource environments? Before turning to the Resilient AI Challenge, I would like to warmly thank the government of India for its leadership in convening this timely, strategic, and important forward-looking summit.

I also would like to acknowledge the co-chairs of the Working Group on Resilience, Innovation, and Efficiency, the Ministry of Power of India, and the Ministry of Ecological Transition of France, for their strong commitment, engagement, and stewardship. My sincere thanks also go to our technical and ecosystem partners, including Mistral, Google, Hugging Face, Alkosh, Sarvam AI, and the broader Sustainable AI Coalition, alongside many academic experts who have contributed to this collective effort. UNESCO is proud to serve as a key knowledge partner for this initiative and to support the vision of India regarding AI that truly serves the people, the planet and prosperity. I would like to convey briefly three messages. First, the future of AI will not be defined by scale alone, but rather by resilience.

Second, resource-efficient AI is not a trade-off. It is a path to inclusion and access. Thirdly, delivering impact at scale requires global collaboration that is truly grounded in real-world validation. We are at a critical inflection point. Generative AI tools are now used by more than 1 billion people on a daily basis. Yet, behind every prompt lies a growing energy and resource footprint. Inference already amounts to hundreds of gigawatt hours per year, and this is comparable to the annual electricity use of millions of people in low-income countries. Training frontier models is even more energy intensive. A single large AI model can consume over 1,000 megawatt hours of electricity, enough to power villages across India for a whole year, placing increasing pressure on energy systems and reinforcing inequalities in access to compute and infrastructure.

These challenges are not theoretical. They are real. They directly affect whether AI can be deployed in public services, by small and medium-sized enterprises, in rural health systems and low-connectivity environments, both in developing countries but also in advanced economies facing growing energy constraints.

This is why the next breakthrough in AI will not come from building ever-larger models. It will come from building smarter, leaner, and more resilient systems that can deliver impact under energy constraints rather than exacerbate them. A proverb says, a good life is for everyone. It captures the spirit of living well together, in community, inclusively, and in harmony with our planet. In the same spirit, AI must be designed not only for those with the greatest computing power, but for all communities, everywhere around the world. The work of UNESCO shows that small but conscious design choices, such as model compression, task-specific architectures, and optimized inference, can reduce AI energy consumption by up to 90 % without compromising performance.

Resilient AI is therefore not only greener, it is more inclusive, more affordable, and more adaptable. It lowers barriers for researchers, empowers local ecosystems, and enables AI solutions to reach communities too often left at the margins of the digital transformation. This brings me to why we are here today. It is my pleasure to officially announce the launch of the Resilient AI Challenge, which is a flagship initiative under the India AI Impact Summit Working Group on Resilience, Innovation, and Efficiency. This challenge moves us decisively from principles to action. It brings together model providers, researchers, startups, and academic teams to demonstrate how open-source AI models can be optimized, compressed, and deployed to achieve strong performance while significantly reducing the use of energy.

Rather than comparing entirely different models, the challenge focuses on improving one base model per task, ensuring transparency, fairness, and rigorous benchmarking. Submissions will be evaluated on shared infrastructure and ranked on both accuracy and energy efficiency, generating clear and actionable evidence. The winners of the challenge will be announced at the AI for Good Summit this coming July in Geneva, but the real success will be, of course, much broader than that.

Speaker 1

Thank you. Before we delve into the panel, I will invite the keynote speaker and the panelists to go up front for a picture, now that we have the final line-up, and then we start the panel. Thank you very much. So now let me welcome our distinguished panelists and Mrs. Anne Bouvreau, Special Envoy on AI for France, moderator of this panel, to discuss how to make these models work and deploy in real life to the benefit of all. Thank you so much.

Anne Bouvreau

Thank you very much, Hélène, and thanks for the two keynote speeches that we just had. Without further ado, I think what we want is to head into the discussion, so I will not make long introductions. I’m delighted to welcome our distinguished guests: James Manyika, Senior Vice President, Google Alphabet; Arthur Mensch, CEO of Mistral AI; Abhishek Singh, lead organizer of this summit. A round of applause for him, please. Thank you. And Ambassador Philip Tigo, Ambassador and Tech Envoy for Kenya. Thank you. So the AI industry, according to the International Energy Agency, will probably consume 3 % of worldwide electricity production by 2030. This is not the end of the world, but this is a huge expansion.

And therefore, there are environmental costs and impacts that we need to mitigate. AI, of course, at the same time also creates opportunity to optimize resources, including energy. So how can we ensure that AI’s development, in particular in developing countries but everywhere as well, is something that comes together with a focus on the planet? I’ll start with a question for Ambassador Philip Tigo. Let me turn to you first. You’re an active proponent of a more efficient and sustainable AI.

Africa is one of the most energy-constrained regions. It’s also a continent where adoption is becoming very frequent. We saw that with mobile phone payment. We saw that with other technologies. How is Kenya approaching efficient AI? What can you share with us?

Ambassador Philip Tigo

Thank you so much. And I’ll be very quick because I can see the ticker. There are a couple of things. One is that we’re very lucky as a country that our energy mix is already 95 percent renewable. And we keep on investing into that. So we have geothermal, we have wind, we have water, we have solar, and we have hydro. So that’s the first kind of framework that we have, that it really must be green by design. The second part, of course, is that where the green comes in, it’s not always about the efficient data centers or how energy-efficient they are, but also about the use of it. So part of our green by design is also a kind of wide-scale education around how people use these resources.

For example, you shouldn’t be looking for the next Starbucks, for example, when you’re using AI. You should really be using Google as an option. So people need to have those choices in their heads by design. The third part, of course, is that protecting Kenya alone is not enough. You can put a green shield around the country, but AI is global. So the third part, quickly, is working in the international framework. So as you know, we worked with the Coalition for Sustainable AI to champion the first-ever AI environmental sustainability resolution, and part of it had the four parts, right? The energy, the life cycle, the sustainability piece, but also improving the state of the science to continue to understand the energy efficiency component of AI.

Anne Bouvreau

Excellent. Thank you so much. And we’ll try to keep this lively. My next question will be for James, for James Manyika. Google is one of the key players, of course Mistral as well and Hugging Face, but you’re a key player in publishing transparent data on the environmental impact of AI. And you develop both very large frontier models and also smaller, very efficient models: the Gemini and Gemma families. So I’ll start with the Gemini family. From a business and an engineering standpoint, I think it’s a very interesting family. Where is the real frontier? Is it scaling up or scaling down?

James Manyika

Well, thank you. It’s a pleasure to be here at the summit with you, Anne. To get to the question: we’re actually looking at this on multiple fronts. On the one hand, if you look at our Gemini models, it’s not one model. We have a whole model family, which starts with the Gemini Pro models and goes to the Gemini Flash models, which are some of the most efficient models. So we’re trying to make sure that with our Gemini family we cover the performance-efficiency frontier of these models. You may have noticed that recently no one really talks a lot about model size. Remember, two or three years ago it used to be the big craze.

It used to be the big question: how many parameters. And that’s because even with our Pro models, we’re now pursuing these mixture-of-experts architectures, where a query doesn’t activate the entire model. No one runs fully dense models anymore; people are activating only a subset of experts. So on the Gemini models, we’re trying to cover the performance and efficiency frontier. Then we also have our Gemma models. Our Gemma models are our most efficient open-source, open-weights models. In fact, here in India, on AI Kosh, which is the platform in India, we actually have 23 Gemma models on there. And that’s because we’ve optimized them for different sizes.

Some of them are efficient enough to run on a single GPU, because we know that for needs on the edge, people want a variety of model choices. I’ll say two more quick things. Every year we focus on efficiency because, from an energy point of view, from a compute-efficiency point of view, even from a business standpoint, it’s the right thing to do. As you start to serve many more people, you want the most efficient systems. I’ll say one last thing, which is that we are making probably the most investments of anybody into using green, clean energy for our compute.

In fact, we’ve made this audacious goal that at some point in the 2030 era, we want to be 24/7 carbon-free. So we’ve made investments in nuclear and in geothermal, and we actually have several operational data centers running on geothermal. We’re using hydro, we’re using wind and solar. So we’re trying to get to a point where all our energy use for our compute is carbon-free. That’s our kind of moonshot goal.

Anne Bouverot

Excellent, thank you so much. I’d like to move to Arthur Mensch. Mistral is developing very large models, but is really also very good at high-performance compact models. And I know your engineers, and you as a co-founder and CEO, also care strongly about the environmental impact of AI and what can be done there. So what can you share with us on that? And with both your business and engineering experience, where does model efficiency have the highest return?

Arthur Mensch

So I would start with a couple of technical aspects. To James’ point, model size is indeed not the only thing we should be looking at. Effectively, we are using sparse mixtures of experts, because those are models which have a lot of parameters to store knowledge, but where you only activate about 5% of them. That has been a key way of reducing the number of FLOPs you do to generate one token, which is the one thing that matters for energy and therefore for carbon intensity. It’s one of the multipliers, actually. So the sparsity matters, and then the other thing that matters is the systems on top: the caching systems you can put in place, the way you’re managing the context so that you’re not reprocessing information. And beyond just releasing the model weights, which is something we’ve always done, we’re also heavy contributors to inference frameworks that are using more and more advanced technology to handle the caching systems in a way that removes the wasteful computations we used to do. So it’s an algorithmic problem, which is actually very interesting. It’s also a machine learning problem, because depending on the request you’re getting, you can route the request to a small model or to a large model. And so, to James’ point, it’s very important for any company doing models to have small models all the way to large models, in particular because the large ones can be used to make specialized models after that. That’s an important point. But I would say, if you look at the carbon footprint of artificial intelligence today, because most of the GPUs are currently being used for training, most of the weight comes from the fact that you have around 10 labs in the world training models that at the end look very similar.
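Arthur’s sparsity point can be sketched numerically. The parameter counts below are hypothetical, not Mistral’s actual model sizes, and the rule of thumb that a forward pass costs roughly 2 FLOPs per active parameter per token is a common approximation, not an exact figure:

```python
# Rough sketch of why sparse mixture-of-experts (MoE) models cut the
# energy-relevant quantity Arthur names: FLOPs per generated token.
# Rule of thumb: forward pass ~ 2 FLOPs per ACTIVE parameter per token.
# All parameter counts here are hypothetical illustrations.

def flops_per_token(active_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return 2.0 * active_params

TOTAL_PARAMS = 400e9      # a hypothetical 400B-parameter model
ACTIVE_FRACTION = 0.05    # only ~5% of experts activate per token

dense = flops_per_token(TOTAL_PARAMS)                  # dense baseline
moe = flops_per_token(TOTAL_PARAMS * ACTIVE_FRACTION)  # sparse MoE

print(f"dense model: {dense:.2e} FLOPs/token")
print(f"sparse MoE:  {moe:.2e} FLOPs/token "
      f"({dense / moe:.0f}x fewer, same knowledge capacity)")
```

Under these assumed numbers the MoE does 20x less compute per token while keeping the full parameter count available to store knowledge, which is exactly the trade Arthur describes.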

And so for us, if I look at our biggest leverage there, the fact that we’ve been open-sourcing models that are very large, and really open-sourcing our best models, has been a major way of reducing the externality cost you’re producing. Because we’re investing, and it costs a lot of carbon to train a model, but then we give it for free to everyone else. And what that means is that people can build on top, and that amortizes the cost. Suddenly you don’t have 10 companies training the same kind of models; this thing is out there and you don’t need to reinvest. So I think that’s the big part. That’s really on the training front.

And today, training is the thing that takes most of the cost. Now, when it comes to our own approach to sustainability, I agree with James: one of the multipliers is the carbon intensity of your energy. And so there is a locality aspect to it. We’ve been building our data centers, and recently we’ve been training our models on our own hardware, which sits in France, and France is heavily nuclear, so the carbon intensity is low. Also 95 percent, yes, sorry, Philip. And in Sweden, where it’s not 95 percent but still very good, you have hydro. So choosing the locality is important, because it’s one of the multipliers that you want to optimize for.

And finally, the one thing to worry about: model size is one thing, carbon intensity is one thing, and then chips are another thing. So being able to use a diversity of chips is huge; it’s super important. And we are working on using new kinds of chips that are much more efficient from an energy perspective. Now, to James’ point, I would like to add that the good thing about AI is that we are energy-constrained, and so efficiency is actually driven by business. I would say transparency is super important for us and matters for our customers, so we’ve done a very deep study of the carbon intensity of our training; we’ve done it with Mistral Large 2, with third-party auditors, etc.
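The “multipliers” Arthur lists compose as a simple product, which a few lines can illustrate. The energy-per-token and grid-intensity figures below are made-up round numbers for illustration only, not measured values from Mistral’s study:

```python
# Illustrative sketch: per-request emissions are a product of multipliers:
# (tokens) x (energy per token, set by model sparsity and chip efficiency)
# x (carbon intensity of the local grid, set by where the hardware sits).
# All numbers are hypothetical round figures, not measured values.

def grams_co2(tokens: float, joules_per_token: float,
              grams_per_kwh: float) -> float:
    """CO2 emitted serving `tokens`, given energy/token and grid intensity."""
    kwh = tokens * joules_per_token / 3.6e6   # 1 kWh = 3.6e6 joules
    return kwh * grams_per_kwh

TOKENS = 1_000_000        # tokens served
ENERGY = 0.5              # joules per token (hypothetical)

# Hypothetical grid intensities in grams CO2 per kWh:
for grid, intensity in [("coal-heavy grid", 800),
                        ("nuclear-heavy grid", 50),
                        ("hydro-heavy grid", 25)]:
    print(f"{grid:>18}: {grams_co2(TOKENS, ENERGY, intensity):7.1f} g CO2")
```

Same workload, 32x lower emissions on the hydro grid under these assumed numbers: that is the locality multiplier Arthur and James both point to.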

But the business is also a reason why we’re going toward more efficient models: we don’t have enough energy, so we need things that run on smaller hardware, and it depends on the countries as well. Actually, in the US the constraint is higher than in Europe, and I think it’s going to be very high in Africa and in India down the line. So it’s always good when business aligns with sustainability. And I think it would be valuable for public procurement in particular to put more pressure on sustainability as a way to accelerate the industry, because that raises the stakes and so pushes us toward more efficiency.

Anne Bouverot

Wonderful. Thank you so much. Do you want to react quickly, James, before we go to Abhishek?

James Manyika

I was going to agree with Arthur, but let me add a couple more components. One of the things that is also important in this conversation is what you actually apply AI to. There’s a whole range of applications of AI that are actually helpful for sustainability: grid management, managing adaptation to the effects of climate change. And we’re seeing a lot of those kinds of applications at scale, in ways that make an enormous difference to the sustainability question.

Arthur Mensch

Adding to that, you have agriculture as well, where you have a lot of leverage, and you have materials science and chemistry. So we work with vertical AI companies to try and make that happen.

Anne Bouverot

Great to see this. Thank you. I think we have a very high-quality exchange in this panel. Abhishek, I’d like to move to you, and the microphone as well. Arthur actually introduced the fact that energy constraints are real, and they’re real in India, of course: you have such a large population and wide market and also, of course, infrastructure constraints. How do you approach this? How does the AI mission in India approach this? And what are you doing on this front?

Abhishek Singh

the AI factories, with the hope that ultimately this investment will pay off. But when we look at how it will pay off, it will come through inferencing. And when we are doing inferencing at scale, ultimately users will have to pay. So unless you focus on efficiency and sustainability, the actual ROI on the investments will not work out. So it will be in the interest of everyone, and only those players will survive who ensure that per-token energy use is minimal. It will require innovation at multiple levels: in the algorithms, in how you do the inferencing, in how you use it. And therein the value of small language models will come in.

While it’s fashionable to go for a trillion-parameter model and more, ultimately if you are building use cases in key sectors like healthcare or education or agriculture, you’ll need to go with smaller models, which will consume less energy and cost less. So sustainability is a given. So what we are doing in the IndiaAI Mission is, number one, we are not chasing the trillion-parameter models. We are not in the parameter game, number one. Number two, we are not even right now at the stage in which frontier companies are. I don’t think any of us is chasing AGI, which is glamorized by some of the frontier AI labs.

We are trying to think of what solutions can be built using the current level of models available, which can solve societal problems in various sectors, to have real impact. And when we do that, the cost per inference, the cost per query, becomes material, because many of the public sector applications, especially in sectors like agriculture, healthcare, and education, will for some time have to be funded by government, which means taxpayers’ money. So we cannot be extravagant in doing that. So we are ensuring that the PUEs of data centers are lower, ensuring grid efficiency. In fact, we are doing a project with the Ministry of Power, which I think finds a mention in the resilient inter… committee’s report also, wherein we are using AI for improving grid efficiency and reducing transmission and distribution losses. What we have found is that doing it smartly, using technology, brings down the T&D losses by almost 10 to 15%. That’s again a big, big gain. So we’ll have to look at the entire ecosystem, right from what kind of chips you are using for what: if you are doing inferencing, do you need the high-end chip for doing that? Classifying it, having a very sector-specific, use-case-specific approach to designing your systems will ultimately be where the game is, and those who are able to do that will be able to build more sustainable systems. Their cost per query will be lower and they will be able to survive.
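Abhishek’s point that data-centre PUE and per-token energy drive the cost per query can be sketched in a few lines. The token count, energy figure, PUE values, and tariff below are hypothetical placeholders, not IndiaAI figures:

```python
# Hedged sketch: how data-centre PUE (total facility power / IT power)
# and per-token energy feed into the cost per query that Abhishek says
# will decide which players survive. All numbers are hypothetical.

def cost_per_query(tokens: int, joules_per_token: float,
                   pue: float, rupees_per_kwh: float) -> float:
    """Electricity cost of one query, including facility overhead (PUE)."""
    it_kwh = tokens * joules_per_token / 3.6e6   # IT-equipment energy
    return it_kwh * pue * rupees_per_kwh         # scaled by facility overhead

TOKENS = 2_000          # tokens per query (hypothetical)
ENERGY = 0.5            # joules per token (hypothetical)
TARIFF = 8.0            # rupees per kWh (hypothetical)

for pue in (1.6, 1.1):  # a mediocre vs an efficient facility
    cost = cost_per_query(TOKENS, ENERGY, pue, TARIFF)
    print(f"PUE {pue}: {cost * 100:.4f} paise per query")
```

Multiplied across hundreds of millions of users, the gap between PUE 1.6 and 1.1 becomes exactly the kind of taxpayer-funded cost Abhishek cautions against.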

So we, as government, are trying to enable this, but ultimately I feel that business sense will ensure that sustainability comes in. It cannot be that we consume as much energy as we want, unmindful of the ramifications. The funds and the VCs will pay only up to a particular point. It cannot be forever.

Anne Bouverot

Excellent. Thank you. We’re unpacking a number of things: training from inference and utilization, large models from smaller models, and the fact that you ideally need the larger models, through open source, to be able to build the smaller ones. We’re looking at how AI can loop back and help optimize. We’ve heard a number of super interesting things. You started a little on this, Arthur, but let me ask this question of everyone quickly. We also heard that business and commercial interests are aligned with the desire to make AI more sustainable, which is a very hopeful message, but what can governments and institutions do to further help improve this?

Arthur, you hinted at public procurement. Do you want to say a few more words on this?

Arthur Mensch

Yes, it’s one of the ways we can make sure that efficiency is favored. Again, I think the market can solve it, but it can be accelerated, and the faster we can go, the better, because effectively we’re building a lot of electricity capacity at the moment for AI, and so if we can just make sure that efficiency is part of the requirements, that’s good. It’s worth noting that, for better or worse, being a generative AI company is turning into being a utility company, in that you’re basically turning electricity into tokens. It’s highly competitive, so the margins are getting thinner, which means things are also getting price-sensitive. And when things get price-sensitive, efficiency really matters.

So that’s going to be partially solved by the market, but it can be accelerated. And I’d say another way governments can lead is by sustaining open-source projects that go beyond the models. The inference path, what we call the agent harness, is also something that will eventually become a common good and can be used everywhere. And so: good practices, and incentivizing research as well, because the domains of routing, of picking the right models, of distillation, do not require you to have thousands of GPUs. You can do efficient research there, so public research in that domain is very much possible, and we’d love to see more of it.

So I guess those are the three things I can mention.

Anne Bouverot

Wonderful. Thank you. James, do you want to add a few words on that?

James Manyika

Yeah, first of all, I agree with the three things that Arthur mentioned. I would add a couple more. One of the things that’s actually quite interesting is that the more government can incentivize and encourage the use of off-grid solutions, the better, because that takes the burden off the public infrastructure that affects citizens. So, for example, we’re spending a lot of time thinking about off-grid solar, off-grid wind, and geothermal. We’ve even invested in our own small modular reactors. And we’re also investing, to Arthur’s point, in breakthrough research. One of the most exciting areas, by the way, which is not as far away as people think, is fusion energy.

So we’ve made some of the biggest investments in fusion energy. And, by the way, AI is actually helping us make that progress, because one of the things you worry about with fusion energy is what’s called plasma containment: how you hold and contain these high-energy particles. AI has actually helped us do that. So even the use of AI in breakthrough research like that is pretty important. I’ll say one other quick thing, because it reinforces something that Arthur and the minister said: inference is going to turn out to be the most important thing in many respects, far more than the training part of this.

And we’ve actually started to invest in that. So, for example, we have our own chips, TPUs; we use both TPUs and GPUs. And we’ve actually built some inference-specific TPUs, to be able to do inference even more efficiently than you typically would with a general-purpose GPU.

Anne Bouverot

Wonderful. Thank you. Ambassador Philip Thigo, what can you add? Maybe you can take the microphone from your neighbor, and then I’ll ask Abhishek to conclude.

Ambassador Philip Thigo

No, very quick, because a lot of the solutions are for developed economies. I think we have to be a little bit realistic in terms of where emerging economies are. One, there’s a bigger question of sovereignty, and there are conversations around that, and there have to be trade-offs. Every country wants to have the entire stack in their country. So I think governments need to be very realistic about which parts of the stack they really want to keep in their country, especially in this “AI for green and green AI” conversation. I think the second part, again, especially in emerging economies, is to look at sustainability across the stack.

So we may not have compute necessarily, but we have other parts of the stack. So how do you ensure that part of the training gets done there? The third part, I think, is to expand the definition of safety, because AI safety is very much about the models and not necessarily about the use and the potential harms to the environment. I’ve not seen that research. So there could be an expansion of research looking at AI safety including environmental concerns. The other quick one, of course, is that you can only know the environmental footprint from use cases, and it has to be specific. These are deep dives, and I have a sense people need to invest in deep dives.

When I look at food systems, that’s an entire system, so there are potential problems there. And to my last point, around standards: we really have to invest in the standards. We’ve seen that in other electronics, right? So we need to see that here. Everybody needs to know the kind of environmental standards to apply, and that needs to be done at scale. Thank you so much.

Anne Bouverot

Abhishek, what can governments do? You represent a government.

Abhishek Singh

Governments are doing… Every government is conscious of this. In India, in fact, we recently focused on the small modular reactors, which James mentioned: we came out with a new policy under which the sector has been opened up for the private sector to invest as well. What we do believe is that inferencing needs will go up, and in India, when we are talking inferencing, we are talking inferencing at scale. If, say, 100 or 200 million people in the first phase, and ultimately 500 million and more, start using these services, the kind of back-end infrastructure that we need will be huge, and it will consume a lot of energy. So to reduce the load on the existing grid, we will need to think of off-grid solutions.

We will need to think of dedicated small modular reactors, which can power the AI applications. The world over, what we are seeing is that as AI adoption goes up, energy costs go up. And if energy costs go up, ultimately for elected governments that doesn’t go down so well. So the entire strategy has to be thought through: how do we balance the need for more efficient and more intense AI solutions with the need for sustainability, with the need to reduce the carbon footprint, because we are also only a few years away from the 2030 Sustainable Development Goals. So ultimately we need to balance both: the need for more efficient AI and the need to reduce the impact on the environment.

Otherwise we solve one problem and create another. So that’s again something the governments are concerned about, and I think augmenting the renewable energy sources, solar, wind and nuclear, and eventually fusion, will be the way forward.

Anne Bouverot

Yeah, thank you very much. I think this has been a fascinating discussion. We heard from all of the panelists that the environmental impact of AI is not an afterthought. It’s actually front and center. It’s part of the competitive advantage, part of what companies and governments think about. This is a very strong and positive message that I think we can all be reassured by. Let me just close by mentioning the Resilient AI Challenge that was mentioned at the beginning. Registrations close on March 15th, so please submit your solution. Please join me in thanking this wonderful panel. Thank you, everyone, for joining us today, and we really hope to see you engage in this Resilient AI Challenge.

It is a first at the international level, working on improving research on compressed models, one of the solutions and tools presented in this panel, so we really encourage you to register. Thank you so much to our panelists; another round of applause. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (14)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The host introduced Mrs Anne Le Henanf, France Minister Delegate for AI and Digitalisation Affairs as the first distinguished speaker.”

The knowledge base records the host welcoming Mrs Anne Le Henanf, France Minister Delegate for AI and Digitalization Affairs, confirming her role and introduction. [S34]

Confirmed (high)

“Le Henanf warned that AI’s energy demands already outpace the growth of green‑energy capacity.”

A source explicitly states that AI’s energy demands threaten to outpace green-energy progress. [S1]

Confirmed (high)

“Massive, unsustainable AI models risk creating a new fairness crisis by excluding regions with limited resources.”

The knowledge base mentions a fairness crisis where large, unsustainable AI models create new divides and can exclude regions and communities lacking resources. [S1]

Additional Context (medium)

“AI’s energy demands are growing faster than the supply of green energy, posing a major sustainability challenge.”

Broader analyses estimate that global AI-related electricity consumption could equal that of a whole country (e.g., Japan) by 2030 and that data-centre electricity use will more than double, underscoring the scale of the challenge. [S93] and [S102]

Additional Context (medium)

“Large AI models require vast computational resources, significant electricity, and extensive cooling infrastructure.”

A source describes how large-scale AI model development and deployment demand substantial compute power, electricity, and cooling, providing technical detail that supports the report’s statements about energy intensity. [S26]

External Sources (102)
S1
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — -Dr. Tafik Delassie: Assistant Director General for Communication and Technology Sector at UNESCO
S2
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfik Jelassi- Assistant Director General for Communication and Information at UNESCO -Dr. Tawfiq Jilasi- Assista…
S3
DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023 — Dr. Tawfik Jelassi, Assistant Director-General for Communication and Information Sector, UNESCO
S4
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — – Anne Le Henanf- Dr. Tafik Delassie – Anne Le Henanf- Dr. Tafik Delassie- Ambassador Philip Tigo
S5
THE FORGOTTEN FRENCH Exiles in the British Isles, 1940-44 — - Mauriac, C., The Other de Gaulle (London, Angus & Robertson, 1973) - Michel, H., Histoire de la France Libre (P…
S6
Global Health Diplomacy — Ilona Kickbusch is the director of the Global Health Programme at the Graduate Institute of International and Developmen…
S7
Responsible AI for Shared Prosperity — -Philip Thigo- His Excellency Ambassador, Special Technology Envoy of the Government of Kenya
S8
Toward Collective Action_ Roundtable on Safe & Trusted AI — And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on…
S9
S10
A Digital Future for All (afternoon sessions) — – James Manyika – Senior VP, Google-Alphabet and Co-Chair of the Secretary-General’s High-level Advisory Body on Artific…
S11
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Because we believe that AI’s true potential lies in its ability to deliver population -scale impact, transforming educat…
S12
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — -James Manyika: Senior Vice President, Google Alphabet
S13
State of Play: AI Governance / DAVOS 2025 — – Arthur Mensch: Co-founder and Chief Executive Officer, Mistral Arthur Mensch: I’m suggesting that this is the direct…
S14
The Role of Government and Innovators in Citizen-Centric AI — – Arthur Mensch- Jarek Kutylowski – Arthur Mensch- Roberto Viola
S15
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — – Arthur Mensch- Ambassador Philip Tigo – Arthur Mensch- James Manyika- Abhishek Singh
S16
THE FORGOTTEN FRENCH Exiles in the British Isles, 1940-44 — - Mauriac, C., The Other de Gaulle (London, Angus & Robertson, 1973) - Michel, H., Histoire de la France Libre (P…
S17
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S18
Inclusive AI_ Why Linguistic Diversity Matters — -Anne Bouverot- Special envoy to the president (France)
S19
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S20
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty — Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S22
Open Forum #30 High Level Review of AI Governance Including the Discussion — – **Abhishek Singh** – Under-Secretary from the Indian Ministry of Electronics and Information Technology Abhishek Sing…
S23
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified
S24
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S25
Is AI the key to nuclear renaissance? — In the global race for AI dominance, tech giants spare no effort in securing the necessary energy resources. However, th…
S26
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S27
UNSC meeting: Peace and common development — In this speech, the speaker emphasises the critical importance of international cooperation and multilateralism in addre…
S28
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Emily Bender:Thank you so much. Ohayou gozaimasu. I’m joining you from Seattle, where it is the evening. And I have prep…
S29
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S30
UN AI resolution a significant global effort to harness AI for sustainable development  — On 21 March, the United Nations General Assembly (UNGA) overwhelmingly passed the firstglobal resolution on AI. Member s…
S31
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — ## Challenges and Unresolved Issues ## Key Agreements and Consensus ## Setting the Context: Twenty Years of WSIS and C…
S32
US-led UN resolution calls for safe AI systems to address global challenges — On 15 March 2024, the United States (US) and 54 co-sponsors issued a joint statement on the proposed United Nations Gene…
S33
Researchers propose social and environmental certification framework for AI — Researchers at the Montreal AI Ethics Institute, Microsoft, McGill University, and Carnegie Mellon Universityhave propos…
S34
https://dig.watch/event/india-ai-impact-summit-2026/smaller-footprint-bigger-impact-building-sustainable-ai-for-the-future — AI’s energy demands. Threaten to outpace green energy progress. Model providers face a stark reality. AI’s energy needs …
S35
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Offered candid insights into France’s AI governance journey since 2018, including significant cultural resistance within…
S36
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — A very good morning, ladies and gentlemen. Our next session is a panel discussion on AI for science. The panel will be m…
S37
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — It offers enormous opportunities to increase the productivity and sustainability of local food production. It offers opp…
S38
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Overall, the analysis highlighted the need for innovation, inclusive policies, and partnerships to achieve sustainable d…
S39
Democratizing AI Building Trustworthy Systems for Everyone — “to echo the obvious point, which is that measurement is tremendously important”[83]. “These are examples of what’s nece…
S40
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Jerry SHEEHAN:All right, thank you very much, Patrick. I’m delighted to be able to join you, even though it can only be …
S41
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — AI has significantlyincreased energy consumption, with data centres now consuming approximately 2% of global electricity…
S42
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S43
Workshop 3: Quantum Computing: Global Challenges and Security Opportunities — Funding challenges due to unpredictable return on investment timelines present obstacles to development and deployment. …
S44
It’s Over for Turnover: Retaining Talent in Cyberspace — Dr. Almerindo Graziano:Yeah, sorry if I may add. Yes, please. I think that one of the biggest problem that the gap, the …
S45
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S46
Powering the Technology Revolution / Davos 2025 — Anne Bouverot: A lot has been said. And I agree with all of this. I don’t want to repeat it. I just want to comment …
S47
Keynote-Roy Jakobs — And they do that across imaging, monitoring and connected care. The work done here does not stay here alone. It shapes s…
S48
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Another perspective suggests that countries from the Global South are not prioritising sustainability and climate protec…
S49
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — High level of consensus with strong implications for sustainable AI development. The agreement across speakers from diff…
S50
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Government representatives emphasized their role in creating enabling policy environments while acknowledging capacity c…
S51
Press Conference: Closing the AI Access Gap — Finally, there is strong agreement among the speakers for trust-based, multi-stakeholder partnerships in AI. They argue …
S52
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — AI is here. Are countries ready, or not? How can countries accelerate their effective adoption and utilization of AI for…
S53
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S54
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Excellencies, distinguished guests, ladies and gentlemen, it’s an honor to address you at Smaller Footprints, Bigger Imp…
S55
Building Scalable AI Through Global South Partnerships — The discussion concluded with optimism about AI’s potential to drive meaningful social change across the Global South, c…
S56
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Quote from UNDP Human Development Report 2025 stating that innovation incentives favor rapid deployment and automation o…
S57
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S58
TradeTech for Greener Supply Chains — Government regulations, policy changes, and incentives were highlighted as crucial factors in promoting sustainability. …
S59
Leveraging AI4All: Pathways to Inclusion — By embedding standards that reward accessibility and open standards into procurement, governments can shape market incen…
S60
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Funding and Policy Mechanisms In 99% of UN member states, the public sector is still the biggest single buyer, making p…
S61
Powering the Technology Revolution / Davos 2025 — Anne Bouverot: A lot has been said. And I agree with all of this. I don’t want to repeat it. I just want to comment …
S62
Building the Workforce: AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S63
Building India's Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S64
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S65
AI Algorithms and the Future of Global Diplomacy — This collaborative approach reflects what Yaktiyami termed “managed interdependence” rather than complete technological …
S66
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S67
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Energy Sustainability & Cooling He points out that India faces land, water and power constraints, recommending hybrid e…
S68
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such as power capping (limiting processor power to 60-80% of…
S69
Canada considers $15 billion incentive to boost AI data centres — Canada’s federal government is exploring a proposal to offer up to $15 billion in incentives to encourage domestic pension…
S70
AI energy demand accelerates while clean power lags — Data centres are driving a sharp rise in electricity consumption, putting mounting pressure on power infrastructure that …
S71
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Environmental Impact and Climate Justice Moltzau argues that given the current climate crisis and multiple global chall…
S72
Davos report marks AI misinformation as an immediate threat to democracy and environment — Advanced AI fueling false and misleading information poses the immediate risk of eroding democracy and polarising society…
S73
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Artificial intelligence (AI) is improving the ways we live, work and solve problems. It can also help us fight climate c…
S74
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — AI’s energy demands. Threaten to outpace green energy progress. Model providers face a stark reality. AI’s energy needs …
S75
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Additionally, they highlight the importance of considering sustainable development goals and respecting human rights in …
S76
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S77
Global Standards for a Sustainable Digital Future — ## Sustainability and Environmental Integration in Standards – **Maike Luiken**: Chair of standard working group addres…
S78
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S79
State of Play: AI Governance / DAVOS 2025 — Mensch mentions Mistral’s efforts in promoting open-source models and working with various countries, including Saudi Ar…
S80
How African knowledge and wisdom can inspire the development and governance of AI — Audience:Sure. Thank you. Thank you very much. Just, I think it is very hard to speak after Ambassador Kerr, who is the …
S81
Digital on Day 3 of UNGA79: Addressing AI, misinformation, and the need for global cooperation — In the area of development, several key issues were highlighted regarding affordable financing, financial inclusion, and…
S82
MASTERPLAN FLAGSHIP PROGRAMMES — To create this plan, the government will convene an interagency AI task force comprised of National Government agencies,…
S83
A Digital Future for All (afternoon sessions) — AI governance requires a multi-stakeholder approach due to the diverse nature of opportunities, risks, and inclusivity c…
S84
Day 0 Event #249 Sustainable Digital Growth Net Negative Net Zero or Net Positive — – Multi-stakeholder collaboration is essential across sectors and borders
S85
Press Conference: Closing the AI Access Gap — Finally, there is strong agreement among the speakers for trust-based, multi-stakeholder partnerships in AI. They argue …
S86
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Multi-stakeholder cooperation and inclusive governance frameworks are essential
S87
Building Inclusive Societies with AI — The discussion highlighted that addressing India’s informal workforce challenges requires sustained collaboration across…
S88
Open Internet Inclusive AI Unlocking Innovation for All — -Announcer: Event host/moderator introducing the speakers and session
S89
Open Forum #37 Digital and AI Regulation in La Francophonie an Inspiration and Global Good Practice — Audience: Hello, ladies and gentlemen. Hello, Ambassador Emoso. I am Sidi Kabubaka Nondishao, from Alexandria at the Uni…
S90
Open Forum #33 Building an International AI Cooperation Ecosystem — This comment reframes the urgency of AI governance from a technical challenge to an existential imperative. It introduce…
S91
Using AI to tackle our planet’s most urgent problems — These key comments fundamentally shaped the discussion by transforming what could have been a standard technology presen…
S92
AI for Democracy: Reimagining Governance in the Age of Intelligence — This comment provides a crucial conceptual distinction that reframes the entire discussion. Instead of asking how AI can…
S93
Rapid AI growth raises global energy demands — The global demand for AI technology is set to consume nearly as much energy by 2030 as Japan does today, with much of that…
S94
High-level AI Standards panel — Kathleen A. Kramer: So, at IEEE, we believe that in standards to advance technology, but we see standards as far more th…
S95
DC-CIV Evolving Regulation and its impact on Core Internet Values | IGF 2023 — Overall, the Dynamic Coalition, under the leadership of Olivier Crepin-Leblond, provides an open platform for discussion…
S96
Global Digital Compact – Informal Consultations (3rd Meeting) — However, it concurrently acknowledges the risks posed by digital technologies, such as cybersecurity threats and the spr…
S97
United Nations Office for Digital and Emerging Technologies — ODET is facilitating the GDC’s endorsement process and supporting the integration of its commitments into the updated WS…
S98
Accelerating an Inclusive Energy Transition | IGF 2023 Open Forum #133 — International cooperation and input are highly valued by the speakers. They appreciate the contribution and input from a…
S99
DC-Inclusion & DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — – Tawfik Jelassi: Assistant Director General of Communication and Information Sector of UNESCO Najib Mokni: Good morni…
S100
Day 0 Event #252 Editorial Media and Big Tech Dependency the Material Conditions for a Free and Resilient News Media — – **Tawfik Jelassi** – Assistant director general for communications and information at UNESCO; PhD in information syste…
S101
A Digital Future for All (morning sessions) — – Tawfik Jelassi – Assistant Director-General for Communication and Information, UNESCO Tawfik Jelassi: Excellencies, …
S102
Powering AI | Global Leaders Session | AI Impact Summit India Part 2 — The scale of the challenge is substantial. Current global data centre electricity consumption stands at 415 terawatt hou…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Anne Le Henanf
7 arguments · 90 words per minute · 490 words · 325 seconds
Argument 1
AI energy demands outpace green energy progress, risking climate goals (Anne Le Henanf)
EXPLANATION
The minister warns that the growing energy requirements of AI systems are increasing faster than the development of renewable energy sources, threatening to undermine climate objectives. She frames this as an urgent environmental imperative for governments pursuing decarbonisation.
EVIDENCE
She states that AI’s energy demands threaten to outpace green energy progress and that model providers face a stark reality where AI’s energy needs are growing faster than supply [10-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External evidence notes that AI’s energy needs are growing faster than renewable supply and threaten climate goals, as highlighted in reports on AI’s energy demands and green AI challenges [S1][S26][S41][S46].
MAJOR DISCUSSION POINT
Energy demand vs. green supply
AGREED WITH
Dr. Tafik Delassie, James Manyika, Arthur Mensch, Ambassador Philip Tigo
Argument 2
Large models deepen global inequality by excluding low‑resource regions (Anne Le Henanf)
EXPLANATION
The minister highlights a fairness crisis, arguing that massive AI models that are not sustainable create new divides, marginalising communities and regions that lack computational resources. This exacerbates existing global inequities.
EVIDENCE
She describes a fairness crisis where massive AI models without sustainability create new divides and can exclude regions and communities lacking resources [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The fairness crisis and risk of excluding low-resource regions are documented in analyses of AI’s environmental and equity impacts [S1][S34].
MAJOR DISCUSSION POINT
Fairness and inequality
AGREED WITH
Dr. Tafik Delassie, James Manyika, Arthur Mensch, Abhishek Singh
DISAGREED WITH
Dr. Tafik Delassie, James Manyika, Arthur Mensch
Argument 3
Sustainable AI is embedded in the UN Global Digital Compact and UNEA resolution; coalition now includes 15 countries (Anne Le Henanf)
EXPLANATION
Anne explains that sustainable AI has been codified in major UN frameworks, giving it a formal international status. She also notes the expansion of the Sustainable AI Coalition to fifteen member countries, signalling broad diplomatic support.
EVIDENCE
She mentions that Sustainable AI is embedded in the UN Global Digital Compact and a UN Environment Assembly resolution, and that the coalition now includes 15 countries with the Netherlands joining this year [21][19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainable AI’s inclusion in the UN Global Digital Compact and UNEA resolution is confirmed by the UN AI resolution and Digital Compact documents [S30][S31].
MAJOR DISCUSSION POINT
International policy embedding
Argument 4
Publication of the second version of a global AI environmental‑sustainability standardization framework (Anne Le Henanf)
EXPLANATION
The minister announces the release of an updated global approach to standardising AI environmental sustainability, aiming to promote consistency across the sector. This is presented as a concrete step toward measurable progress.
EVIDENCE
She proudly announces that, on behalf of the coalition, ITU, IEEE and ISO have published the second version of the global approach on standardisation for AI environmental sustainability [25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A proposed environmental-social certification framework for AI and emphasis on measurement support the development of a global standardisation approach [S33][S39].
MAJOR DISCUSSION POINT
Standardisation framework
Argument 5
France implements low‑carbon AI policies, green data centers, and leads the Sustainable AI Coalition with concrete standards (Anne Le Henanf)
EXPLANATION
France is portrayed as a pioneer, adopting policies that require AI to run on renewable energy in green data centres, and promoting leaner, smarter AI designs. The country also leads the coalition that brings together diverse stakeholders to advance sustainable AI.
EVIDENCE
She describes France’s implementation of policies for low-carbon efficient AI powered by renewable energy hosted in green data centres, and notes France’s leadership in launching the Resilient AI Challenge together with India and UNESCO [26-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
France’s AI governance, green data-centre initiatives, and France-India collaboration are described in recent policy reviews [S35][S36].
MAJOR DISCUSSION POINT
National low‑carbon AI strategy
AGREED WITH
Dr. Tafik Delassie, Ambassador Philip Tigo, James Manyika, Arthur Mensch, Abhishek Singh
Argument 6
Resilient and sustainable AI is essential to unlock digital transformation, environmental protection and inclusive development
EXPLANATION
The minister argues that making AI resilient and sustainable is the key driver for broader digital transformation, protecting the environment and fostering inclusive growth.
EVIDENCE
She states that resilient and sustainable AI is the key to unlocking digital transformation, environmental protection and inclusive development [8-9].
MAJOR DISCUSSION POINT
Role of sustainable AI in development
Argument 7
Measurement is a prerequisite for improvement; without metrics, AI sustainability cannot be advanced
EXPLANATION
She emphasizes that measuring AI’s environmental impact is critical because improvement is impossible without data, highlighting the measurement pillar of the coalition’s approach.
EVIDENCE
She says “You can’t improve what you can’t measure” and announces the publication of a global standardisation approach for AI environmental sustainability to promote consistency [24-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of measurement for AI sustainability is underscored in discussions of certification frameworks and measurement emphasis [S33][S39].
MAJOR DISCUSSION POINT
Need for measurement and standards
Dr. Tafik Delassie
7 arguments · 153 words per minute · 985 words · 385 seconds
Argument 1
AI inference consumes hundreds of GWh annually, comparable to electricity use of millions in low‑income countries (Dr. Tafik Delassie)
EXPLANATION
Delassie quantifies the energy footprint of AI inference, stating that current usage already amounts to hundreds of gigawatt‑hours per year, a level comparable to the total electricity consumption of millions of people in low‑income nations.
EVIDENCE
He notes that inference already amounts to hundreds of gigawatt hours per year, comparable to the annual electricity use of millions of people in low-income countries [52-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data-centre electricity consumption figures and analyses of AI’s large-scale energy use provide context for the magnitude of inference demand [S41][S26].
MAJOR DISCUSSION POINT
Inference energy footprint
AGREED WITH
Anne Le Henanf, James Manyika, Arthur Mensch, Ambassador Philip Tigo
Argument 2
Energy‑intensive training reinforces compute access gaps, threatening equitable deployment (Dr. Tafik Delassie)
EXPLANATION
He points out that training frontier models consumes massive electricity—over 1,000 MWh for a single large model—enough to power villages for a year, thereby widening the gap between regions with abundant compute and those without.
EVIDENCE
He explains that a single large AI model can consume over 1,000 MWh of electricity, enough to power villages across India for a whole year, reinforcing inequalities in access to compute and infrastructure [55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on AI’s energy demands and fairness crises highlight how compute-intensive training widens access gaps [S1][S34].
MAJOR DISCUSSION POINT
Training energy and inequality
AGREED WITH
Anne Le Henanf, James Manyika, Arthur Mensch, Ambassador Philip Tigo
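The quoted 1,000 MWh figure can be sanity-checked with simple arithmetic. The per-household consumption and village size below are illustrative assumptions, not numbers from the session:

```python
# Back-of-envelope check on the "one training run powers villages" claim.
# The household and village sizes are illustrative assumptions.
TRAINING_ENERGY_MWH = 1_000        # quoted figure for one large model
HOUSEHOLD_KWH_PER_YEAR = 1_000     # assumed annual use of a rural household
HOUSEHOLDS_PER_VILLAGE = 200       # assumed village size

training_kwh = TRAINING_ENERGY_MWH * 1_000  # 1 MWh = 1,000 kWh
households = training_kwh / HOUSEHOLD_KWH_PER_YEAR
villages = households / HOUSEHOLDS_PER_VILLAGE
print(f"Powers {households:.0f} households (~{villages:.0f} villages) for a year")
```

Under these assumed inputs, one training run corresponds to a year of electricity for on the order of a thousand households, which is the scale of comparison the speaker invokes.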
Argument 3
Future breakthroughs will arise from leaner, resilient systems rather than ever larger models (Dr. Tafik Delassie)
EXPLANATION
Delassie argues that the next major AI breakthroughs will come from building smarter, more resource‑efficient systems, not from scaling model size ever larger. This shift is presented as essential for sustainable impact.
EVIDENCE
He states that the next breakthrough in AI will not come from building ever-larger models but from building smarter, leaner, and more resilient systems that can deliver impact under energy constraints [59-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for resource-efficient AI as a path to inclusion and the development of certification frameworks emphasize the shift toward leaner models [S1][S33].
MAJOR DISCUSSION POINT
Shift to resilient AI
AGREED WITH
Anne Le Henanf, James Manyika, Arthur Mensch, Abhishek Singh
DISAGREED WITH
Anne Le Henanf, James Manyika, Arthur Mensch
Argument 4
Model compression and task‑specific architectures can cut AI energy use by up to 90 % without performance loss (Dr. Tafik Delassie)
EXPLANATION
He cites evidence that careful design choices—such as compressing models, using task‑specific architectures, and optimizing inference—can reduce AI energy consumption dramatically while preserving performance.
EVIDENCE
He reports that small but conscious design choices like model compression, task-specific architectures, and optimized inference can reduce AI energy consumption by up to 90 % without compromising performance [65].
MAJOR DISCUSSION POINT
Energy‑saving design techniques
AGREED WITH
Anne Le Henanf, James Manyika, Arthur Mensch, Abhishek Singh
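How savings of this magnitude can compound is sketched below with a toy model; the specific ratios (distillation to 25% of parameters, fp32-to-int8 quantisation) and the assumption that inference energy scales roughly with active parameters times precision width are illustrative, not from the talk:

```python
# Toy model of compounding compression savings. The mapping from compute
# reduction to energy reduction is a simplifying assumption.
def energy_remaining(param_ratio: float, bit_ratio: float) -> float:
    """Fraction of baseline inference energy left, assuming energy scales
    with (active parameters x precision width)."""
    return param_ratio * bit_ratio

# Distil to 25% of parameters, then quantise fp32 -> int8 (8/32 bits).
remaining = energy_remaining(param_ratio=0.25, bit_ratio=8 / 32)
print(f"Energy remaining: {remaining:.0%}")  # ~6%, i.e. a cut above 90%
```

This is only a multiplicative sketch, but it shows how two independent design choices can together reach the "up to 90%" range the speaker cites.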
Argument 5
Launch of the Resilient AI Challenge by India, France, and UNESCO to move from principles to action (Dr. Tafik Delassie)
EXPLANATION
Delassie announces the creation of a global competition that will encourage participants to demonstrate open‑source AI models that are both high‑performing and energy‑efficient, turning policy commitments into concrete outcomes.
EVIDENCE
He officially announces the launch of the Resilient AI Challenge, a flagship initiative under the India AI Impact Summit Working Group, which moves from principles to action by having model providers, researchers, startups and academic teams optimise and compress models while reducing energy use [68-74].
MAJOR DISCUSSION POINT
Challenge as action mechanism
AGREED WITH
Anne Le Henanf, Ambassador Philip Tigo, James Manyika, Arthur Mensch, Abhishek Singh
Argument 6
AI breakthroughs should focus on lean, resilient systems that serve low‑resource environments such as rural health and education
EXPLANATION
Delassie suggests that the next major AI advances will come from building smarter, more efficient models that can operate under energy constraints, enabling applications in low‑connectivity settings.
EVIDENCE
He notes that the next breakthrough will be building smarter, leaner, and more resilient systems that can deliver impact under energy constraints, citing examples like rural health systems and low-connectivity environments [59-61][58].
MAJOR DISCUSSION POINT
Resilient AI for low‑resource contexts
Argument 7
AI must be designed for all communities, not just those with abundant compute power, to ensure inclusive access
EXPLANATION
He stresses that AI should be built to serve everyone, including marginalized groups, rather than being limited to high‑compute users.
EVIDENCE
He says AI must be designed not only for those with the greatest computing power, but for all communities, emphasizing universal accessibility [63-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The fairness crisis and need for inclusive AI are discussed in analyses of AI’s environmental and social impacts [S1].
MAJOR DISCUSSION POINT
Inclusive AI design
Arthur Mensch
8 arguments · 183 words per minute · 1190 words · 389 seconds
Argument 1
Sparse mixture‑of‑experts activates only ~5 % of parameters, drastically reducing FLOPs per token (Arthur Mensch)
EXPLANATION
Mensch explains that using a sparse mixture‑of‑experts architecture means only a small fraction of model parameters are active for any given token, cutting the computational work (FLOPs) required and thus lowering energy consumption.
EVIDENCE
He describes that sparse mixture of experts activates only about 5 % of parameters, which reduces the number of FLOPs needed to generate one token, a key factor for energy and carbon intensity [169-170].
MAJOR DISCUSSION POINT
Sparse MoE efficiency
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Abhishek Singh
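The compute saving Mensch describes can be sketched with the common rule of thumb of roughly 2 FLOPs per active parameter per generated token; the 400B total-parameter count below is hypothetical:

```python
# Why sparse mixture-of-experts cuts per-token compute: only the routed
# experts' parameters are active. Parameter counts here are hypothetical.
def flops_per_token(active_params: float) -> float:
    # Rule of thumb: ~2 FLOPs per active parameter per generated token.
    return 2 * active_params

TOTAL_PARAMS = 400e9     # hypothetical total parameter count
ACTIVE_FRACTION = 0.05   # ~5% of parameters active per token (per Mensch)

dense = flops_per_token(TOTAL_PARAMS)
sparse = flops_per_token(TOTAL_PARAMS * ACTIVE_FRACTION)
print(f"dense: {dense:.1e} FLOPs/token, sparse MoE: {sparse:.1e} FLOPs/token")
```

At a 5% active fraction the per-token compute drops by a factor of 20 relative to a dense model of the same total size, which is the mechanism behind the energy claim.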
Argument 2
Open‑sourcing large models amortizes the carbon cost of training across the community (Arthur Mensch)
EXPLANATION
Mensch argues that by releasing large models openly, the high carbon emissions incurred during training are shared among many downstream users, reducing the overall environmental impact compared with each organization training its own model.
EVIDENCE
He notes that open-sourcing large models amortises the carbon cost of training across the community, because the initial training carbon is incurred once and then reused by many, avoiding duplicate training emissions [172-178].
MAJOR DISCUSSION POINT
Amortised training emissions
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Abhishek Singh
DISAGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, James Manyika
Argument 3
Public procurement criteria can accelerate industry adoption of sustainable AI practices (Arthur Mensch)
EXPLANATION
Mensch suggests that governments can use their purchasing power to require sustainability metrics in AI procurements, thereby pushing the market toward greener solutions more quickly.
EVIDENCE
He states that public procurement can put more pressure on sustainability as a way to accelerate the industry, raising the stakes for companies to adopt efficient practices [196-197].
MAJOR DISCUSSION POINT
Procurement as lever
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, Ambassador Philip Tigo, James Manyika, Abhishek Singh
DISAGREED WITH
James Manyika
Argument 4
Transparency through third‑party carbon‑intensity audits meets customer demand and drives sustainable choices (Arthur Mensch)
EXPLANATION
Mensch highlights that providing audited, transparent data on the carbon intensity of AI training helps satisfy client expectations and encourages the adoption of greener models.
EVIDENCE
He mentions that transparency is important, citing a deep study with third-party auditors on the carbon intensity of training for Mistral Large, which meets customer demand for sustainable choices [193-194].
MAJOR DISCUSSION POINT
Audited carbon transparency
AGREED WITH
Anne Le Henanf, James Manyika
Argument 5
Competitive market pressures make energy efficiency a key differentiator for AI providers (Arthur Mensch)
EXPLANATION
Mensch observes that as AI services become commodified and price‑sensitive, companies that can deliver lower energy consumption gain a competitive advantage, making efficiency a market driver.
EVIDENCE
He explains that AI is becoming a utility-like business where price sensitivity makes efficiency crucial, and that this market pressure will partially solve the sustainability challenge [194-196].
MAJOR DISCUSSION POINT
Efficiency as competitive edge
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Ambassador Philip Tigo
Argument 6
Locating AI training in regions with low‑carbon energy mixes reduces the carbon intensity of model development
EXPLANATION
Mensch notes that training models on hardware situated in countries with low‑carbon electricity sources lowers the overall emissions associated with AI training.
EVIDENCE
He explains that Mistral trains models on its own hardware in France, which is heavily nuclear, and in Sweden where hydro provides low-carbon power, highlighting the importance of locality for carbon intensity [182-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions of AI’s energy-decarbonisation strategies include leveraging nuclear and low-carbon power sources for training workloads [S25][S46].
MAJOR DISCUSSION POINT
Geographic locality and carbon intensity
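The locality argument reduces to multiplying the same energy use by different grid carbon intensities; the intensity figures below are rough ballpark assumptions included only for illustration:

```python
# Same training run, different grids: emissions = energy x grid intensity.
# Grid intensities (gCO2/kWh) are rough illustrative assumptions.
TRAINING_KWH = 1_000_000  # a 1,000 MWh training run

GRID_INTENSITY_G_PER_KWH = {
    "France (nuclear-heavy)": 50,
    "Sweden (hydro-heavy)": 40,
    "coal-heavy grid": 800,
}

for grid, intensity in GRID_INTENSITY_G_PER_KWH.items():
    tonnes = TRAINING_KWH * intensity / 1e6  # grams -> tonnes
    print(f"{grid}: ~{tonnes:.0f} t CO2")
```

Under these assumed intensities, the identical workload emits an order of magnitude more CO2 on a coal-heavy grid, which is why Mensch stresses where training happens.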
Argument 7
Adopting a diverse portfolio of energy‑efficient chips is crucial for cutting AI’s carbon footprint
EXPLANATION
He argues that using a variety of chips that are more energy‑efficient can significantly reduce AI’s energy consumption.
EVIDENCE
He states that being able to use the diversity of chips is huge and that new kinds of chips are much more efficient from an energy perspective [190-191].
MAJOR DISCUSSION POINT
Chip efficiency as a lever
Argument 8
As AI services become utility‑like and price‑sensitive, efficiency becomes a competitive differentiator driving market adoption
EXPLANATION
Mensch observes that AI is turning into a utility business where margins are thin, making energy efficiency essential for competitiveness and market acceleration.
EVIDENCE
He describes AI becoming a utility-like business in which price sensitivity makes efficiency a key factor for companies to survive, and public procurement can push sustainability further [254-256][194-196].
MAJOR DISCUSSION POINT
Market pressure for efficiency
James Manyika
6 arguments · 176 words per minute · 827 words · 280 seconds
Argument 1
Google’s Gemini family spans performance‑efficiency frontier, using mixture‑of‑experts and dedicated efficient variants (James Manyika)
EXPLANATION
Manyika describes Google’s Gemini portfolio as covering a range from high‑performance to highly efficient models, employing mixture‑of‑experts architectures and specialized lightweight versions to balance capability and energy use.
EVIDENCE
He outlines that the Gemini family includes Gemini Pro, Gemini Flash (efficient models), and Gemma models (open-source, optimized for different sizes, some running on a single GPU), all designed to cover the performance-efficiency frontier [133-148].
MAJOR DISCUSSION POINT
Gemini model family
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, Arthur Mensch, Abhishek Singh
DISAGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, Arthur Mensch
Argument 2
Efficiency lowers cost per token, essential for ROI and survival of AI services at scale (James Manyika)
EXPLANATION
Manyika argues that improving energy and computational efficiency directly reduces the cost per token, which is critical for the financial viability of AI services as user numbers grow.
EVIDENCE
He notes that every year they focus on efficiency because it reduces energy costs and compute costs, and is the right business decision when serving many more people, emphasizing the need for the most efficient systems [151-153].
MAJOR DISCUSSION POINT
Cost efficiency for ROI
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, Arthur Mensch, Ambassador Philip Tigo
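The cost logic can be made concrete with a toy calculation; every input below is an assumed illustrative value, not a figure from the panel:

```python
# Sketch of why cost per token drives viability at scale.
# All inputs are assumed illustrative values.
ENERGY_KWH_PER_1M_TOKENS = 0.5   # assumed serving energy per million tokens
PRICE_USD_PER_KWH = 0.10         # assumed electricity price
TOKENS_PER_DAY = 1e12            # assumed fleet-wide daily token volume

daily_energy_cost = (TOKENS_PER_DAY / 1e6) * ENERGY_KWH_PER_1M_TOKENS * PRICE_USD_PER_KWH
print(f"Daily serving energy cost: ${daily_energy_cost:,.0f}")
# Halving energy per token halves this line item directly,
# which is why efficiency compounds into ROI at scale.
```

Even with these modest assumed unit costs, the line item is large at a trillion tokens per day, so per-token efficiency gains flow straight to the bottom line.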
Argument 3
Major investments in carbon‑free energy, green data centers, and dedicated inference chips aim for 24/7 carbon‑free compute (James Manyika)
EXPLANATION
Manyika details Google’s ambitious investments in renewable and nuclear energy, green data centres, and purpose‑built inference hardware, targeting continuous carbon‑free operation for its compute workloads.
EVIDENCE
He lists investments in nuclear, geothermal, hydro, wind, solar, and the goal of 24/7 carbon-free compute, along with the development of inference-specific TPUs and other chips to improve efficiency [153-158][280-282].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Investments in nuclear, geothermal, hydro, wind, and solar power for AI workloads are highlighted as part of the push for carbon-free compute [S25][S46].
MAJOR DISCUSSION POINT
Carbon‑free compute ambition
Argument 4
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
EXPLANATION
Manyika calls for policy measures that promote off‑grid renewable solutions, strengthen standards, and fund detailed environmental footprint analyses to reduce AI’s energy burden on public grids.
EVIDENCE
He urges governments to incentivise off-grid solar, wind, geothermal, and mentions investments in fusion energy, noting that off-grid solutions relieve pressure on public infrastructure [267-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy recommendations call for off-grid renewable solutions to power AI, aligning with off-grid renewable strategies discussed in AI sustainability literature [S25][S31].
MAJOR DISCUSSION POINT
Policy incentives for off‑grid AI
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, Ambassador Philip Tigo, Arthur Mensch, Abhishek Singh
DISAGREED WITH
Arthur Mensch
Argument 5
AI applications in grid management and climate adaptation can deliver substantial sustainability benefits at scale
EXPLANATION
Manyika highlights that AI can be applied to manage electricity grids and adapt to climate‑change effects, providing large‑scale environmental impact reductions.
EVIDENCE
He mentions a whole range of applications of AI that are helpful for sustainability, such as grid management and managing adaptation and effects of climate change, noting their significant difference at scale [203-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI optimizing energy trading and grid operations illustrate AI’s potential for sustainability applications [S42].
MAJOR DISCUSSION POINT
AI for sustainability applications
Argument 6
Inference will become the dominant energy consumer in AI, surpassing training, making efficient inference a priority
EXPLANATION
He asserts that inference, not training, will be the most important factor for AI’s energy use, emphasizing the need to focus on inference efficiency.
EVIDENCE
He states that inference is going to be the most important thing in many respects, far more than the training part of this [277-279].
MAJOR DISCUSSION POINT
Inference vs training energy focus
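Manyika's claim that inference, not training, will dominate AI's energy budget can be illustrated with a toy break-even calculation. Every number below (training energy, per-query energy, query volume) is a hypothetical assumption chosen for illustration, not a figure from the session.

```python
# Toy break-even model: days until cumulative inference energy exceeds
# the one-time training energy. All numbers are illustrative assumptions.

TRAINING_ENERGY_MWH = 1_000      # assumed one-time training cost
ENERGY_PER_QUERY_WH = 0.3        # assumed energy per inference query
QUERIES_PER_DAY = 50_000_000     # assumed daily query volume

# Convert daily inference energy from Wh to MWh (1 MWh = 1e6 Wh).
daily_inference_mwh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY / 1e6
days_to_parity = TRAINING_ENERGY_MWH / daily_inference_mwh

print(f"Daily inference energy: {daily_inference_mwh:.1f} MWh")
print(f"Inference surpasses training after ~{days_to_parity:.0f} days")
```

Under these assumed numbers, inference overtakes the entire training cost within a few months, after which it dominates indefinitely, which is the structural point behind prioritizing inference efficiency.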
Ambassador Philip Tigo
4 arguments · 206 words per minute · 583 words · 169 seconds
Argument 1
Kenya leverages a 95 % renewable energy mix, educates users on responsible AI use, and engages in international sustainability frameworks (Ambassador Philip Tigo)
EXPLANATION
The ambassador explains that Kenya’s electricity generation is already 95 % renewable, and the country promotes green‑by‑design AI through both infrastructure and user education, while also participating in global sustainability initiatives.
EVIDENCE
He states that Kenya’s energy mix is 95 % renewable (geothermal, wind, water, solar, hydro) and that green-by-design includes educating users on responsible AI consumption, as well as working with the Coalition for Sustainable AI on the first AI environmental-sustainability resolution [107-112][119-120].
MAJOR DISCUSSION POINT
Kenyan renewable AI strategy
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Arthur Mensch
DISAGREED WITH
James Manyika, Abhishek Singh
Argument 2
Governments need realistic approach, sovereignty, standards, and deep‑dive assessments for AI sustainability (Ambassador Philip Tigo)
EXPLANATION
He cautions that emerging economies must balance sovereignty concerns with global AI sustainability goals, emphasizing the need for realistic stack decisions, expanded safety research that includes environmental impacts, and robust standards backed by detailed footprint studies.
EVIDENCE
He discusses sovereignty issues, the need to decide which parts of the AI stack stay national, expanding safety research to cover environmental concerns, and investing in standards and deep-dive assessments for specific use-cases such as food systems [286-306].
MAJOR DISCUSSION POINT
Sovereignty and standards in AI sustainability
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Arthur Mensch, Abhishek Singh
Argument 3
Educating users on responsible AI consumption is part of Kenya’s green‑by‑design strategy
EXPLANATION
He explains that beyond green infrastructure, Kenya promotes user education to encourage efficient AI usage, such as avoiding unnecessary AI queries.
EVIDENCE
He says part of green-by-design includes wide-scale education on how people use resources, giving the example that users shouldn't ask an AI model to find the next Starbucks when an ordinary search engine like Google would do [112-115].
MAJOR DISCUSSION POINT
User education for sustainable AI
Argument 4
Kenya actively participates in the Coalition for Sustainable AI and supports the first AI environmental‑sustainability resolution
EXPLANATION
He notes Kenya’s involvement in international frameworks to champion AI sustainability standards.
EVIDENCE
He mentions working with the Coalition for Sustainable AI to champion the first ever AI resolution on environmental sustainability, which includes four parts (energy, life cycle, sustainability, science) [119-120].
MAJOR DISCUSSION POINT
International collaboration on AI sustainability
Abhishek Singh
3 arguments · 193 words per minute · 907 words · 281 seconds
Argument 1
India prioritizes inference efficiency, grid‑loss reduction projects, and policies that open AI infrastructure to private investment (Abhishek Singh)
EXPLANATION
Singh outlines India’s focus on making inference energy‑efficient, reducing transmission and distribution losses in the power grid, and creating regulatory frameworks that invite private sector participation in AI infrastructure.
EVIDENCE
He notes a project with the Ministry of Power that uses AI to improve grid efficiency, cutting transmission-distribution losses by 10-15% [235-236], and references a new policy that opens the AI sector to private investment and encourages off-grid solutions [310-314].
MAJOR DISCUSSION POINT
Indian AI‑energy policy and grid efficiency
AGREED WITH
Anne Le Henanf, Dr. Tafik Delassie, Ambassador Philip Tigo, James Manyika, Arthur Mensch
Argument 2
India is exploring off‑grid renewable power and small modular reactors to meet AI compute demand without overloading the public grid
EXPLANATION
He describes plans to use off‑grid solar, wind, and small modular reactors to power AI workloads, reducing pressure on existing electricity infrastructure.
EVIDENCE
He states that to reduce load on the existing grid, India will need off-grid solutions and dedicated small modular reactors to power AI applications, citing ongoing considerations [315-316].
MAJOR DISCUSSION POINT
Off‑grid energy solutions for AI
Argument 3
Balancing AI efficiency with sustainability is essential to achieve the 2030 Sustainable Development Goals
EXPLANATION
He emphasizes that India must align AI efficiency improvements with broader sustainability targets to avoid creating new problems while solving others.
EVIDENCE
He notes that the strategy must balance efficient AI with reducing environmental impact to meet SDG 2030 goals, warning that failing to do so would create new problems [317-319].
MAJOR DISCUSSION POINT
Alignment of AI policy with SDGs
Speaker 1
3 arguments · 67 words per minute · 190 words · 168 seconds
Argument 1
Event host set the agenda, framing sustainable AI as central to the summit’s purpose (Speaker 1)
EXPLANATION
The opening host introduces the event, explicitly stating that the summit will focus on sustainable AI and that the two distinguished speakers will set the tone for the discussion.
EVIDENCE
In the opening remarks, the host says, “And this is what we will explore at this event… To introduce the topic, we will first have two distinguished speakers” [1-4].
MAJOR DISCUSSION POINT
Agenda framing by host
Argument 2
Speaker 1 highlights France’s pioneering role in ‘Sustainable AI’, framing it as a model for sustainable AI leadership
EXPLANATION
In the opening remarks, the host thanks the minister for France’s pioneering contributions to sustainable AI, positioning France as a leader in the field.
EVIDENCE
The host says “Many thanks, Madam Minister, for this insightful introduction and the pioneering role of France in Sustainable AI” [34].
MAJOR DISCUSSION POINT
Recognition of national leadership in sustainable AI
Argument 3
Speaker 1 introduces Dr. Tafik Delassie’s landmark report on smaller models, underscoring the importance of model size reduction for sustainability
EXPLANATION
The host announces the arrival of Dr. Delassie and references his report on smaller models, signaling the relevance of model compression to the summit’s agenda.
EVIDENCE
The host states “I have now the pleasure to welcome Dr. Tafik Delassie… whose landmark report on smaller models was published in July last year” [35-36].
MAJOR DISCUSSION POINT
Emphasis on smaller models for sustainable AI
Anne Bouvreau
3 arguments · 78 words per minute · 971 words · 738 seconds
Argument 1
AI projected to consume 3 % of global electricity by 2030, posing major environmental risks
EXPLANATION
Bouvreau warns that, according to the International Energy Agency, AI is expected to use about three percent of worldwide electricity production by 2030, representing a huge expansion that could exacerbate climate impacts.
EVIDENCE
She cites the IEA forecast that AI will consume 3 % of global electricity by 2030 and stresses that this scale of growth entails significant environmental costs that must be mitigated [86-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses estimate AI could consume around 2-3 % of global electricity, with data-centre demand projected to double, supporting the 3 % forecast [S41][S26].
MAJOR DISCUSSION POINT
Energy consumption forecast
Argument 2
AI can be leveraged to optimise energy and resource use, turning the technology into an environmental solution
EXPLANATION
Bouvreau points out that AI not only creates challenges but also offers opportunities to improve resource efficiency, including energy optimisation, suggesting a dual role for AI in sustainability.
EVIDENCE
She states that AI, at the same time, creates opportunity to optimise resources, including energy, indicating its potential to contribute positively to environmental goals [93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in energy trading and resource optimisation is documented in case studies of AI-driven energy markets [S42].
MAJOR DISCUSSION POINT
AI as a tool for resource optimisation
Argument 3
Ensuring AI development aligns with planetary sustainability, especially in developing countries, is essential for inclusive progress
EXPLANATION
Bouvreau raises the question of how AI development can be pursued in a way that safeguards the planet, emphasizing the need for a sustainability focus that includes developing nations.
EVIDENCE
She asks how AI development, particularly in developing countries, can be pursued together with a focus on the planet, highlighting the importance of inclusive, sustainable AI strategies [94-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN resolutions and the Global Digital Compact emphasize inclusive, sustainable AI development for the Global South [S30][S31].
MAJOR DISCUSSION POINT
Sustainable AI development in the Global South
Agreements
Agreement Points
AI’s growing energy consumption and the urgent need for efficiency
Speakers: Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Arthur Mensch, Ambassador Philip Tigo
AI energy demands outpace green energy progress, risking climate goals (Anne Le Henanf)
AI inference consumes hundreds of GWh annually, comparable to electricity use of millions in low‑income countries (Dr. Tafik Delassie)
Energy‑intensive training reinforces compute access gaps, threatening equitable deployment (Dr. Tafik Delassie)
Efficiency lowers cost per token, essential for ROI and survival of AI services at scale (James Manyika)
Inference will become the dominant energy consumer in AI, surpassing training, making efficient inference a priority (James Manyika)
Competitive market pressures make energy efficiency a key differentiator for AI providers (Arthur Mensch)
Kenya leverages a 95 % renewable energy mix, educates users on responsible AI use, and engages in international sustainability frameworks (Ambassador Philip Tigo)
All speakers highlighted that AI’s energy demand is large and rising, threatening climate goals and equity, and stressed that improving efficiency (through greener energy mixes, cost-per-token reductions, and smarter inference) is essential [10-13][52-56][151-153][277-279][194-196][107-115].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent analyses at Davos 2025 and UN-focused reports document a sharp rise in AI-related electricity demand that outpaces clean-energy supply, underscoring the urgency for efficiency measures [S61][S70][S68].
Measurement, standards and transparency are essential for sustainable AI
Speakers: Anne Le Henanf, Arthur Mensch, James Manyika
Measurement is a prerequisite for improvement; without metrics, AI sustainability cannot be advanced (Anne Le Henanf)
Transparency through third‑party carbon‑intensity audits meets customer demand and drives sustainable choices (Arthur Mensch)
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
The speakers agreed that robust measurement and standardized, transparent reporting are a prerequisite for progress, calling for global standards, third-party audits, and policy incentives to ensure reliable metrics [24-25][193-194][267-270].
POLICY CONTEXT (KNOWLEDGE BASE)
The UNDP Human Development Report 2025 warns that innovation incentives often sideline transparency, prompting calls for robust measurement and standards in AI policy frameworks; procurement guidelines that embed open standards reinforce this need [S56][S59][S60].
Future AI breakthroughs will come from smaller, efficient, resilient models rather than ever larger ones
Speakers: Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Arthur Mensch, Abhishek Singh
Large models deepen global inequality by excluding low‑resource regions (Anne Le Henanf)
Future breakthroughs will arise from leaner, resilient systems rather than ever larger models (Dr. Tafik Delassie)
Model compression and task‑specific architectures can cut AI energy use by up to 90 % without performance loss (Dr. Tafik Delassie)
Google’s Gemini family spans performance‑efficiency frontier, using mixture‑of‑experts and dedicated efficient variants (James Manyika)
Sparse mixture‑of‑experts activates only ~5 % of parameters, drastically reducing FLOPs per token (Arthur Mensch)
Open‑sourcing large models amortizes the carbon cost of training across the community (Arthur Mensch)
India prioritizes inference efficiency, grid‑loss reduction projects, and policies that open AI infrastructure to private investment (Abhishek Singh)
All participants stressed that the next AI advances will rely on leaner, compressed, or sparsely activated models, which cut energy use dramatically and avoid widening inequities, rather than pursuing ever larger parameter counts [14-16][59-60][65][133-148][169-170][172-178][221-224].
POLICY CONTEXT (KNOWLEDGE BASE)
The “Smaller Footprint Bigger Impact” summit co-chaired by France and India emphasizes shifting research toward compact, high-efficiency models as a cornerstone of sustainable AI strategy, a view echoed in discussions on heterogeneous compute for democratizing access [S54][S67].
Public policy, incentives and institutional frameworks are crucial to drive sustainable AI
Speakers: Anne Le Henanf, Dr. Tafik Delassie, Ambassador Philip Tigo, James Manyika, Arthur Mensch, Abhishek Singh
France implements low‑carbon AI policies, green data centers, and leads the Sustainable AI Coalition with concrete standards (Anne Le Henanf)
Launch of the Resilient AI Challenge by India, France, and UNESCO to move from principles to action (Dr. Tafik Delassie)
Governments need realistic approach, sovereignty, standards, and deep‑dive assessments for AI sustainability (Ambassador Philip Tigo)
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
Public procurement criteria can accelerate industry adoption of sustainable AI practices (Arthur Mensch)
India prioritizes inference efficiency, grid‑loss reduction projects, and policies that open AI infrastructure to private investment (Abhishek Singh)
There was broad consensus that governments and multilateral bodies must create policies, standards, procurement rules, and challenge-based incentives to steer AI development toward sustainability and equity [26-27][68-74][286-306][267-270][196-197][310-314].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses highlight government regulation, incentives and nationally-determined contributions (NDCs) as key levers for greener AI, exemplified by Canada’s proposed $15 bn clean-energy AI data-centre incentive and UN-aligned sustainability frameworks [S58][S69][S60].
AI can be leveraged as a tool to address environmental and societal challenges
Speakers: James Manyika, Anne Bouvreau, Dr. Tafik Delassie
AI applications in grid management and climate adaptation can deliver substantial sustainability benefits at scale (James Manyika)
AI can be leveraged to optimise energy and resource use, turning the technology into an environmental solution (Anne Bouvreau)
AI breakthroughs should focus on lean, resilient systems that serve low‑resource environments such as rural health and education (Dr. Tafik Delassie)
All three speakers highlighted that AI is not only a source of environmental pressure but also a powerful means to improve grid efficiency, optimise resources, and deliver services in low-resource settings [203-205][93][58-61].
POLICY CONTEXT (KNOWLEDGE BASE)
IGF 2023 and UNESCO-led forums stress AI’s potential to accelerate climate action and improve social outcomes, positioning AI as an enabler for Sustainable Development Goals and broader societal challenges [S73][S71][S55].
Similar Viewpoints
Both emphasize that AI must be inclusive and avoid reinforcing global inequities; large, unsustainable models marginalise low‑resource regions, so AI should be built for all communities [14-16][63-65].
Speakers: Anne Le Henanf, Dr. Tafik Delassie
Large models deepen global inequality by excluding low‑resource regions (Anne Le Henanf)
AI must be designed for all communities, not just those with abundant compute power, to ensure inclusive access (Dr. Tafik Delassie)
Both see energy efficiency as a core business driver that reduces costs and provides a competitive edge in a price‑sensitive AI market [151-153][194-196].
Speakers: James Manyika, Arthur Mensch
Efficiency lowers cost per token, essential for ROI and survival of AI services at scale (James Manyika)
Competitive market pressures make energy efficiency a key differentiator for AI providers (Arthur Mensch)
Both advocate for off‑grid or renewable energy solutions to power AI workloads, reducing pressure on national grids and supporting sustainability goals [107-110][267-270].
Speakers: Ambassador Philip Tigo, James Manyika
Kenya leverages a 95 % renewable energy mix, educates users on responsible AI use, and engages in international sustainability frameworks (Ambassador Philip Tigo)
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
Both point to technical strategies—open‑source model sharing and model compression—that dramatically lower the carbon footprint of AI development and deployment [172-178][65].
Speakers: Arthur Mensch, Dr. Tafik Delassie
Open‑sourcing large models amortizes the carbon cost of training across the community (Arthur Mensch)
Model compression and task‑specific architectures can cut AI energy use by up to 90 % without performance loss (Dr. Tafik Delassie)
Unexpected Consensus
Off‑grid renewable energy solutions for AI compute
Speakers: Ambassador Philip Tigo, James Manyika
Kenya leverages a 95 % renewable energy mix, educates users on responsible AI use, and engages in international sustainability frameworks (Ambassador Philip Tigo)
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
It is unexpected that a representative of a developing nation (Kenya) and a senior executive from a leading AI corporation both converge on the need for off-grid renewable power to support AI workloads, despite differing resource capacities and market positions [107-110][267-270].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on heterogeneous compute recommend hybrid and off-grid renewable mixes to improve PUE and lower carbon intensity of AI data centres, supporting off-grid renewable strategies for compute workloads [S67][S68][S61].
Overall Assessment

The discussion revealed strong, cross‑sectoral agreement that AI’s energy footprint is a critical challenge, that measurement and standards are a prerequisite, that the future lies in smaller, efficient models, and that governments and institutions must create policies, incentives, and procurement rules to drive sustainable AI. Participants also concurred that AI can be a tool for environmental and societal benefits.

High consensus across governments, industry, and academia, indicating a shared commitment to prioritize efficiency, measurement, and policy support, which bodes well for coordinated international action on sustainable AI.

Differences
Different Viewpoints
Strategic focus on model size: large, high‑performance models versus smaller, leaner models for sustainability
Speakers: Anne Le Henanf, Dr. Tafik Delassie, James Manyika, Arthur Mensch
Large models deepen global inequality by excluding low‑resource regions (Anne Le Henanf)
Future breakthroughs will arise from leaner, resilient systems rather than ever larger models (Dr. Tafik Delassie)
Google’s Gemini family spans performance‑efficiency frontier, using mixture‑of‑experts and dedicated efficient variants (James Manyika)
Open‑sourcing large models amortizes the carbon cost of training across the community (Arthur Mensch)
Anne Le Henanf warns that massive AI models create a fairness crisis and widen inequality [14-16]. Dr. Delassie argues the next AI breakthrough will come from leaner, resilient systems rather than ever larger models [59-60]. James Manyika describes Google’s Gemini portfolio as covering both high-performance and highly efficient models, maintaining investment in large-scale architectures while adding efficient variants [133-148]. Arthur Mensch counters that releasing large models openly spreads the training carbon cost, reducing overall emissions [172-178]. The speakers share the goal of sustainable AI but diverge on whether the priority should be to shrink models or to continue developing large models with efficiency tricks.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates, such as those at the Sustainable AI Coalition summit, argue that prioritizing smaller, efficient models aligns with climate commitments, while large models raise sustainability concerns [S54][S71].
Preferred energy strategy for AI compute: off‑grid renewable and modular reactors versus relying on national renewable mixes and realistic constraints
Speakers: James Manyika, Abhishek Singh, Ambassador Philip Tigo
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
India is exploring off‑grid renewable power and small modular reactors to meet AI compute demand without overloading the public grid (Abhishek Singh)
Kenya leverages a 95 % renewable energy mix, educates users on responsible AI use, and engages in international sustainability frameworks (Ambassador Philip Tigo)
James Manyika calls for policy incentives that promote off-grid solar, wind, geothermal and even fusion solutions to relieve pressure on public grids [267-270]. Abhishek Singh echoes this, describing India’s plans for off-grid solutions and small modular reactors to power AI workloads [314-316]. In contrast, Ambassador Tigo highlights Kenya’s existing 95 % renewable energy mix and stresses that many solutions are realistic only for developed economies, urging a more pragmatic approach for emerging nations [107-112][286-288]. The disagreement centers on whether AI compute should be powered primarily by new off-grid installations or by leveraging existing national renewable infrastructures.
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations for off-grid renewable and modular solutions coexist with calls for integrating AI workloads into existing national renewable grids, reflecting a policy tension noted in energy-sustainability reports [S67][S68][S61].
Policy levers to accelerate sustainable AI: public procurement mandates versus broader government incentives and standards
Speakers: Arthur Mensch, James Manyika
Public procurement criteria can accelerate industry adoption of sustainable AI practices (Arthur Mensch)
Governments should incentivize off‑grid renewable power, update standards, and support deep‑dive footprint assessments (James Manyika, Abhishek Singh, Ambassador Philip Tigo)
Arthur Mensch proposes that governments use public procurement to require sustainability metrics, thereby pushing the market toward greener AI solutions [252-257]. James Manyika, while also supporting government action, emphasizes financial incentives for off-grid renewable power, updates to standards, and detailed footprint assessments rather than procurement mandates [267-270]. Both agree on the need for governmental action but differ on the primary mechanism to achieve industry-wide sustainability.
POLICY CONTEXT (KNOWLEDGE BASE)
Procurement-driven approaches (e.g., embedding standards in public contracts) are highlighted alongside broader incentive schemes such as tax credits and clean-energy subsidies, illustrating divergent policy pathways for sustainable AI [S59][S60][S58][S69].
Unexpected Differences
Sovereignty and control of the AI technology stack versus global collaborative approaches
Speakers: Ambassador Philip Tigo, Anne Le Henanf, Dr. Tafik Delassie
Governments need realistic approach, sovereignty, standards, and deep‑dive assessments for AI sustainability (Ambassador Philip Tigo)
Sustainable AI is embedded in the UN Global Digital Compact and a UN Environment Assembly resolution; coalition now includes 15 countries (Anne Le Henanf)
Launch of the Resilient AI Challenge by India, France, and UNESCO to move from principles to action (Dr. Tafik Delassie)
While most speakers frame sustainable AI as a globally coordinated effort (Anne Le Henanf cites UN-level embedding of Sustainable AI and Dr. Delassie announces a multinational Resilient AI Challenge [21-22][68-74]), Ambassador Tigo raises concerns about national sovereignty over the AI stack and argues that emerging economies must decide which components stay domestic [288-293]. This tension between global collaboration and national control was not anticipated given the overall consensus on cooperation.
POLICY CONTEXT (KNOWLEDGE BASE)
The Global AI Policy Framework and related diplomatic analyses propose a “managed interdependence” model that balances national AI sovereignty with international cooperation, offering a third-way perspective between full openness and isolation [S64][S65][S66].
Overall Assessment

The panel largely agrees that AI’s environmental impact must be curbed and that sustainable AI is a strategic priority. However, clear disagreements emerge around (1) whether the industry should prioritize shrinking models or continue developing large models with efficiency tricks; (2) the optimal energy strategy—off‑grid renewable installations versus leveraging existing national renewable mixes; and (3) the most effective policy lever—public procurement mandates versus broader incentives and standards. An unexpected clash over national sovereignty versus global collaboration also appears.

Moderate. The disagreements are substantive but do not fracture the overall consensus on the need for sustainable AI. They highlight divergent pathways that could affect policy design, industry investment, and international coordination, suggesting that achieving the shared sustainability goal will require negotiated compromises across model‑size strategies, energy sourcing, and governance mechanisms.

Partial Agreements
Both speakers concur that robust measurement and standards are essential for sustainable AI. Anne Le Henanf stresses the need for metrics and a global standardisation framework [24-25], while Ambassador Tigo calls for detailed, use‑case specific footprint assessments and strong standards [300-306]. Their disagreement lies in the scope: Anne promotes a universal global metric, whereas Tigo emphasizes national sovereignty and tailored deep‑dive studies.
Speakers: Anne Le Henanf, Ambassador Philip Tigo
Measurement is a prerequisite for improvement; without metrics, AI sustainability cannot be advanced (Anne Le Henanf)
Governments need realistic approach, sovereignty, standards, and deep‑dive assessments for AI sustainability (Ambassador Philip Tigo)
Takeaways
Key takeaways
AI’s growing energy demand threatens climate goals and widens the digital divide, making sustainability and fairness an imperative.
The future of AI is expected to rely on leaner, resilient models rather than ever larger ones; techniques like model compression, sparse mixture‑of‑experts, and task‑specific architectures can cut energy use by up to 90% without losing performance.
Open‑sourcing large models helps amortize the carbon cost of training across the community, reducing duplicated high‑energy training runs.
International collaboration is advancing through the Sustainable AI Coalition, UN‑backed standards, and the Resilient AI Challenge, linking research, measurement, and concrete action.
Governments and industry see business and market incentives for efficiency: lower cost per token, competitive advantage, and compliance with public‑procurement sustainability criteria.
National examples show concrete steps: Kenya’s 95% renewable energy mix and education on responsible AI use; India’s focus on inference efficiency, grid‑loss reduction, and policy opening AI infrastructure; France’s low‑carbon AI policies, green data centers, and leadership in standards.
Resolutions and action items
Launch of the Resilient AI Challenge (India, France, UNESCO) to benchmark and reward energy‑efficient model compression; registrations close 15 March; winners announced at the AI for Good Summit in July.
Publication of Version 2 of the global AI environmental‑sustainability standardization framework by ITU, IEEE, and ESO.
Commitment by France to implement low‑carbon AI policies, green data centers powered by renewable energy, and to promote the three‑pillar approach (research, measurement, action).
India’s AI Impact Summit Working Group to continue supporting inference‑efficiency projects, including grid‑loss reduction pilots with the Ministry of Power.
Kenya’s pledge to maintain a 95% renewable energy mix for AI workloads, promote user education on efficient AI usage, and engage in the Sustainable AI Coalition’s standards work.
Industry pledges (Google, Mistral, Hugging Face) to expand efficient model families (e.g., Gemini, Gemma), invest in carbon‑free compute, and provide third‑party carbon‑intensity audits.
Unresolved issues
How to develop and adopt universally accepted, detailed AI carbon‑footprint measurement methodologies beyond the current standard draft.
Balancing national sovereignty over the AI stack with the need for shared, energy‑efficient infrastructure in emerging economies.
Research gaps in AI safety that explicitly incorporate environmental impacts and lifecycle considerations.
Scalable financing and deployment models for off‑grid renewable power (e.g., small modular reactors, solar/wind micro‑grids) to support AI inference in low‑resource regions.
Specific mechanisms for integrating sustainability criteria into public procurement across different jurisdictions.
Suggested compromises
Use public procurement policies to require sustainability metrics, thereby accelerating market adoption without mandating uniform technology stacks.
Encourage open‑source release of large pretrained models so that downstream developers can build smaller, task‑specific models, sharing the training carbon cost.
Adopt a mixed‑model strategy: retain large models for research and specialization, while deploying compressed or expert‑sparse variants for production inference.
Combine renewable‑energy‑powered data centers with targeted off‑grid solutions for high‑density AI workloads, reducing pressure on national grids.
Allow countries to retain critical components of the AI stack for sovereignty while collaborating on shared standards for energy efficiency and environmental safety.
Thought Provoking Comments
The question we face is no longer how can AI work for us, but how can we ensure AI works efficiently, responsibly and fairly for people and for our planet.
Reframes the AI debate from a utility perspective to a sustainability and equity imperative, setting the thematic foundation for the entire discussion.
Established the central narrative of the event, prompting subsequent speakers to frame their contributions around efficiency, fairness, and planetary boundaries rather than pure technological advancement.
Speaker: Anne Le Henanf (France's Minister Delegate for AI and Digital Affairs)
What if the next breakthrough in AI is not about building larger models, but about building leaner, more resilient systems that can solve real‑world problems in low‑resource environments?
Challenges the prevailing hype around ever‑larger models and introduces the concept of resilience as the next frontier, shifting focus to resource‑constrained innovation.
Triggered a pivot in the conversation toward model compression, energy‑efficient design, and the need for AI that works under strict resource constraints, influencing the questions posed to the panelists.
Speaker: Dr. Tafik Delassie (UNESCO)
A single large AI model can consume over 1,000 MWh of electricity—enough to power villages across India for a whole year—placing increasing pressure on energy systems and reinforcing inequalities in access to compute.
Provides a concrete, relatable metric that illustrates the scale of the problem, linking technical choices directly to social and environmental inequities.
Grounded the abstract sustainability discussion in tangible numbers, prompting panelists to discuss concrete mitigation strategies such as mixture‑of‑experts, open‑source sharing, and off‑grid solutions.
Speaker: Dr. Tafik Delassie
We are investing heavily in green energy for our compute—nuclear, geothermal, hydro, wind, solar—with an audacious goal to be 24/7 carbon‑free by 2035.
Shows a major corporate commitment that aligns business incentives with sustainability, demonstrating that large‑scale AI can be decarbonized through infrastructure investment.
Shifted the tone from problem‑identification to actionable corporate pathways, encouraging other panelists to discuss similar commitments and the role of private investment.
Speaker: James Manyika (Senior Vice President, Google Alphabet)
Open‑sourcing large models amortizes the carbon cost of training because many parties can build on a single trained model instead of each training their own, reducing overall emissions.
Introduces a novel economic‑environmental argument for open source, linking community sharing directly to carbon savings—a perspective not previously highlighted.
Prompted discussion on policy levers such as public procurement and standards, and reinforced the idea that collaboration, not competition, can drive sustainability.
Speaker: Arthur Mensch (CEO, Mistral AI)
Public procurement can be a powerful accelerator for efficiency; by embedding sustainability criteria in contracts, governments can push the industry toward greener models.
Identifies a concrete governance tool that can align market forces with environmental goals, moving the conversation from technical solutions to policy mechanisms.
Led to a broader dialogue on governmental roles, with subsequent remarks from James Manyika and Ambassador Philip Tigo about incentives, off‑grid solutions, and standards.
Speaker: Arthur Mensch
India is deliberately not chasing trillion‑parameter models; instead we focus on inference efficiency, sector‑specific use cases, and grid‑loss reduction, achieving 10‑15 % improvements in transmission and distribution losses.
Provides a real‑world national strategy that prioritizes impact over scale, illustrating how policy, industry, and research can be coordinated for sustainable outcomes.
Reinforced the earlier theme of resilience over size, and gave the panel a concrete example of how a large economy can operationalize sustainable AI, influencing the closing remarks.
Speaker: Abhishek Singh (Lead Organizer, AI Impact Summit, India)
Sovereignty concerns mean emerging economies must decide which parts of the AI stack to keep locally, and standards for environmental impact need to be developed just as they are for other electronics.
Highlights geopolitical and regulatory dimensions often overlooked in technical debates, emphasizing the need for tailored standards and local capacity building.
Shifted the discussion toward governance, standards, and the balance between global collaboration and national autonomy, setting the stage for final policy recommendations.
Speaker: Ambassador Philip Tigo (Kenya)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level declaration of sustainable AI as an imperative to concrete technical, economic, and policy pathways. Anne Le Henanf’s framing set the agenda, while Dr. Delassie’s challenge to the ‘bigger‑is‑better’ paradigm redirected focus toward resilience and resource‑efficiency. Quantitative illustrations of energy use grounded the debate, prompting industry leaders like James Manyika and Arthur Mensch to showcase corporate commitments, open‑source strategies, and procurement levers as viable solutions. National perspectives from Kenya and India added layers of sovereignty, standards, and pragmatic implementation, turning abstract concepts into actionable roadmaps. Collectively, these comments created a dynamic flow that progressed from problem definition to solution design, highlighting the interdependence of technology, business models, and governance in achieving sustainable AI.

Follow-up Questions
How can research on AI safety be expanded to explicitly include environmental impact considerations?
Tigo highlighted a gap in current AI safety research, noting that environmental concerns are not typically addressed, and called for dedicated studies in this area.
Speaker: Ambassador Philip Tigo
What methodologies are needed for deep‑dive analyses of AI’s environmental footprint in specific sectors such as food systems?
He emphasized that understanding AI’s impact requires sector‑specific, detailed assessments, and urged investment in such deep‑dive studies.
Speaker: Ambassador Philip Tigo
What standards should be developed and adopted globally to measure and certify the environmental sustainability of AI hardware and software?
Tigo pointed out the necessity of robust, scalable standards—similar to those in other electronics—to ensure consistent environmental performance across AI systems.
Speaker: Ambassador Philip Tigo
How can public procurement policies be designed to prioritize sustainable AI solutions and accelerate industry adoption?
Mensch suggested that governments can leverage procurement requirements to create market pressure for more energy‑efficient AI models and practices.
Speaker: Arthur Mensch
What research is needed on model routing, selection, and distillation techniques to reduce compute demand without large GPU clusters?
He identified these algorithmic areas as high‑leverage opportunities where public research could significantly lower training and inference energy use.
Speaker: Arthur Mensch
What off‑grid renewable energy solutions (e.g., solar, wind, small modular reactors) are viable for powering AI inference workloads at scale?
Manyika advocated for off‑grid power to lessen strain on public grids, mentioning solar, wind, geothermal, and emerging fusion technologies as potential sources.
Speaker: James Manyika
How can governments support the deployment of off‑grid or dedicated energy solutions for AI infrastructure in emerging economies?
Singh echoed the need for off‑grid and modular reactor options to meet the massive inference demand anticipated in India while keeping energy costs manageable.
Speaker: Abhishek Singh
How does the carbon intensity of local energy grids influence the overall environmental impact of AI training, and how should location be factored into model development strategies?
He stressed that training in regions with low‑carbon electricity (e.g., nuclear‑heavy France, hydro‑rich Sweden) can dramatically reduce AI’s carbon footprint.
Speaker: Arthur Mensch
What are the trade‑offs between AI stack sovereignty for emerging economies and the pursuit of greener, more sustainable AI solutions?
Tigo warned that insisting on full domestic control of the AI stack may conflict with sustainability goals, suggesting a realistic balance is needed.
Speaker: Ambassador Philip Tigo
How can the reported 10‑15% reduction in transmission & distribution losses from AI‑driven grid optimization be validated and scaled?
He cited a pilot project improving grid efficiency, indicating a need for systematic evaluation and broader implementation studies.
Speaker: Abhishek Singh
In what ways can AI accelerate fusion energy research, particularly in plasma containment, and what are the implications for sustainable AI development?
Manyika highlighted AI’s role in advancing fusion, linking breakthroughs in clean energy to the broader sustainability of AI itself.
Speaker: James Manyika
What benchmark frameworks should be created to jointly assess model accuracy and energy efficiency for AI competitions like the Resilient AI Challenge?
The challenge emphasizes dual metrics, indicating a need for standardized, transparent benchmarking that balances performance with sustainability.
Speaker: Dr. Tafik Delassie (via challenge description)
What comprehensive lifecycle assessment methods are required to capture emissions from AI model training, inference, and hardware production?
She announced a second version of a global standardization approach, implying further work is needed to fully quantify AI’s total environmental impact.
Speaker: Anne Le Henanf
How does AI adoption affect electricity consumption in low‑income countries, and what policies can mitigate potential inequities?
He noted that AI inference consumes energy comparable to millions of people’s annual electricity use, raising concerns about equity and the need for inclusive policy responses.
Speaker: Dr. Tafik Delassie

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Science AI & Innovation_ India–Japan Collaboration Showcase

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel explored how “AI for Good” can democratize access to health and social services by simplifying and speeding up delivery mechanisms [1-3]. Kritika Sangani explained that Indus Action partners with governments to embed technology, policy redesign, and capacity-building into existing social-protection systems rather than creating parallel structures [14-19], aiming to cut the current ten-step entitlement process down to a single-touch interaction [24-26]. Using the Right-to-Education Act, Indus Action built an open-source digital public good that replaced a manual lottery with a digital one, scaling from just 196 admissions in its first Delhi campaign to over 900,000 children enrolled across 18 states [70-82][84-89]; to improve targeting of the most vulnerable, the organization is piloting a multilingual WhatsApp chatbot that automates initial applications and supports frontline workers, thereby leveraging AI for discovery and outreach [91-98]. Himanshu from the Atal Innovation Mission described the creation of State Innovation Missions to bridge regional disparities, citing examples such as a hackathon to map iron-contaminated water using AI and a dashboard to connect bamboo producers with global markets [122-164][165-176], and also noted broader AI-enabled public-service use cases including traffic-flow optimization, satellite-based lake monitoring, and sensor-driven leak detection in water pipelines [209-224]. Rajesh Babu illustrated AI’s potential in healthcare by developing an AI-driven briefing tool for pharma sales reps and an organ-matching system that evaluates numerous biological parameters to improve transplant outcomes [258-271][300-305]; he argued that AI agents can act as personal assistants for clinicians and patients, streamlining information flow and reducing waiting times for specialist care [280-295].
When asked about the risk of AI widening digital divides, participants emphasized that AI can be embedded with equity algorithms-such as gender-balanced lottery allocations-and that human-in-the-loop designs ensure frontline workers retain control [380-388][393-398]; Himanshu added that widespread smartphone penetration and multilingual language models help flatten urban-rural and linguistic gaps, though mentorship and venture capital still lag in less-developed states [400-417]. The discussion concluded that AI, when integrated as a digital public good and coupled with proactive equity safeguards, can accelerate social impact across education, health, and livelihoods while mitigating exclusionary effects [109-116][423-424].


Keypoints


Major discussion points


AI as a tool to democratise access to social protection and welfare - Kritika explains that “AI for good… is about enabling equitable access for vulnerable citizens to social protection” and describes how the Right-to-Education (RTE) digital public good reduced a 10-step process to a “single-touch” digital lottery, now being used in 18 states [23-26][70-82]. She also notes the use of a multilingual WhatsApp chatbot to target the most vulnerable families [95-98][91-93].


Government-led innovation missions to scale AI and reduce regional disparities - Himanshu outlines the role of the Atal Innovation Mission (AIM) as a “federal body that manages innovation for the country” and its focus on “setting up a State Innovation Mission” to bring AI to under-served eastern and northeastern states, including hackathons on water-quality data and bamboo-market dashboards [35-42][122-131][140-148][155-162].


Private-sector/start-up perspective on AI’s impact and sector focus - Kavikrut and later participants stress that AI is “the strongest tool that startups have ever had access to” and argue that founders should channel it into high-impact sectors such as healthcare and education rather than chase VC money [55-64][61-64][358-367][377-382].


Concrete AI applications across domains - Examples shared include:


• A multilingual chatbot for RTE admissions [95-98];


• An AI-driven “morning presidential briefing” for pharma sales reps that pulls past CRM conversations [261-267];


• AI-assisted organ-matching using multi-parameter biological data [300-307];


• AI-enabled water-pipeline leak detection and satellite-imagery monitoring [215-224].


Ensuring equity and avoiding a digital divide - Kritika proposes embedding equity algorithms (e.g., gender-balanced lottery, SES targeting) and keeping “human-in-the-loop” workers like Anganwadi staff [380-388]; Himanshu adds that AI can bridge language gaps across India’s 22 scheduled languages but warns that mentorship and VC access must also be spread evenly [400-410]; Rajesh argues AI is a “flattening” force that can reduce digital inequality [393-398].


Overall purpose / goal of the discussion


The panel was convened to explore how artificial intelligence can be harnessed for “good”-specifically, to democratise access to public services, accelerate social-impact innovation, and create scalable, equitable solutions for India’s vulnerable populations. Participants shared experiences from government, non-profit, and private-sector perspectives, highlighted concrete use-cases, and debated how to operationalise AI responsibly at scale.


Overall tone and its evolution


Opening (0-5 min): Optimistic and collaborative, with participants expressing enthusiasm about AI’s potential to increase access and speed [1-4][27-30].


Middle (5-20 min): Becomes more informative and technical, detailing specific programmes, regional disparities, and concrete AI pilots [35-42][70-82][122-148].


Later (20-35 min): Shifts to a reflective and cautionary tone, acknowledging challenges such as digital/linguistic divides, the need for equity safeguards, and the risk of “race to the bottom” [380-388][400-410].


Closing (35-45 min): Returns to a hopeful, call-to-action stance, emphasizing collective responsibility to build AI for good and summarising key take-aways [418-424].


Overall, the conversation moves from enthusiastic vision-casting to grounded examples, then to critical reflection, and finally to a unifying, forward-looking conclusion.


Speakers

Kavikrut – Moderator/Host, associated with T-Hub (startup incubator/accelerator) [S12]


Kritika Sangani – Chief of Staff at Indus Action; development sector professional, former Teach for India fellow; focuses on social protection and AI for Good [S1]


Himanshu AIM – Representative of the Atal Innovation Mission (federal body under NITI Aayog, the public-policy think-tank of the Government of India); leads programs such as “Setting Up a State Innovation Mission” [S5]


Rajesh Babu – Speaker on AI applications in pharma/healthcare (AI-enabled personal agents for medical reps, organ-matching AI); role/title not specified in external sources


Audience Member – Unnamed audience participant who asked a question about medical breakthroughs and awareness; no role/title provided


Audience Member 2 – Unnamed audience participant who asked about sectors needing more startups; no role/title provided


Yashi Audience Member 3 – Unnamed audience participant who asked about preventing AI-driven digital divides; no role/title provided


Additional speakers:


None identified beyond the list above.


Full session reportComprehensive analysis and detailed insights

The panel opened with moderator Kavikrut stating that artificial intelligence (AI) is set to “create access” and “democratise access to healthcare” by lowering both price and availability barriers [1-2]. He reinforced this optimism by highlighting access and speed as the twin themes of AI for Good [27-29] and warned that AI could trigger either a “race to the top” or a “race to the bottom”, insisting that the former can only be achieved by focusing on impact [63-64]. Later, responding to an audience question, Kavikrut urged founders to channel their talent into high-impact domains such as healthcare and education [377-382].


Kritika Sangani (Indus Action) situated her work within this vision. The organisation positions the government as the “protagonist” and itself as an “enabler”, embedding technology, policy redesign and capacity-building into existing social-protection systems rather than creating parallel structures [14-19]. Working with about 18 state governments and national ministries such as Labour, Social Justice and Employment, Indus Action seeks to make welfare delivery “equitable” for vulnerable citizens [20-23]. For Indus Action, AI for Good means collapsing the current ten-step entitlement process into a single-touch interaction [24-26].


A concrete illustration of this ambition is the Right-to-Education (RTE) digital public good. The programme draws on Section 12.1c of the RTE Act, which reserves 25% of seats in private unaided schools for children from economically weaker sections [72-75]. Initial on-ground campaigns in Delhi generated 19,000 applications in 2013-14, yet only 196 students received admission calls, exposing a severe bottleneck [79-80]. In response, Indus Action co-developed the “RTE MIS”, an open-source, modular digital lottery that replaces the physical draw and reduces the transaction to a single digital step [81-82]. This solution has since been adopted, in various forms, across 18 states [82] and has enabled roughly 900,000 children to enrol in schools, up from a few tens of thousands a decade earlier [84-89].


To further improve reach, Indus Action is piloting a multilingual WhatsApp chatbot that serves as the first interface for parents applying under RTE [95-98]. The bot not only automates the initial application but also supports frontline workers by lightening their workload and by using AI-driven targeting to identify the most vulnerable families [91-93][332-334]. Kritika emphasised that the same data ecosystems can be “flipped” so that the state discovers eligible citizens, rather than requiring citizens to discover schemes themselves [330-334]. An equity algorithm embedded in the lottery ensures gender-balanced allocations and representation of children with special needs or from socio-economically weaker groups [350-355][380-384].


Himanshu (Atal Innovation Mission, AIM) broadened the perspective to the national level. AIM, housed under the National Institution for Transforming India (NITI Aayog) and created in 2016 as a “brainchild of the Prime Minister”, coordinates innovation across ministries, from schools to high-tech startups [35-42][37-40]. A key current priority is the establishment of State Innovation Missions to bridge the stark technology gap between the more advanced western and southern states and the lagging eastern, northern and northeastern regions [124-130]. The first such mission is slated to launch in an unnamed northeastern state, aiming to create a peer-to-peer learning network that will allow less-resourced states to benefit from successful models elsewhere [140-148][233-235].


Within these state-level pilots, concrete AI-enabled projects were showcased. One hackathon will use AI to map iron-contaminated water at sub-district granularity, providing data for low-cost diagnostic kits [155-162]. Another initiative builds a dashboard to connect bamboo producers in a high-quality-producing state with global markets, addressing price-discovery challenges [165-174]. Additional public-service use cases mentioned include AI-optimised traffic-light cameras to reduce fuel consumption [211-213], satellite-imagery monitoring of drying lakes [215-218], and sensor-driven leak detection in water pipelines [220-224]. These examples illustrate how frontier technologies can be layered onto existing government data to generate scalable, locally relevant solutions.


Returning to the startup ecosystem, Kavikrut described AI as a “super-charged tool” for startups, the most powerful capability they have ever had access to [55-62]. He noted that AI is also reshaping the gig economy, citing platforms such as Swiggy and Zomato that use AI for demand-supply matching [55-62]. Kavikrut also referenced a joint Swissnex-India startup project that is building an AI-driven test to predict cancer risk from DNA data, currently in validation [340-345].


Rajesh Babu illustrated how AI can augment professional workflows and clinical outcomes. He first cited an earlier, less successful attempt using Amazon Alexa, Lex, and Polly, the “Agilesium” prototype for a pharma-rep briefing tool, which was shelved due to poor language handling [260-266]. His team later built an AI-driven “morning presidential briefing” that aggregates a pharma sales rep’s calendar, CRM history and recent doctor interactions into a concise voice memo, dramatically improving information availability at the point of care [261-267]. More ambitiously, a collaboration with the Scripps Institute is developing an AI model that evaluates dozens of biological and physiological parameters to identify the optimal liver-transplant donor-recipient match, a task that traditional algorithms cannot handle [300-307]. Rajesh argued that, because smartphones are ubiquitous, AI will flatten rather than deepen digital divides, acting as a universal equaliser [393-398].


While there was broad agreement that AI can democratise access, the panel diverged on how to safeguard equity. Kritika stressed the need for proactive equity algorithms (gender balance, socio-economic targeting) and for keeping frontline workers “in the loop” so that human judgement complements automated decisions [380-388]. Himanshu added that India’s linguistic diversity-22 scheduled languages and myriad dialects-constitutes a separate divide that must be addressed through multilingual AI models [403-410]. By contrast, Rajesh suggested that the sheer penetration of smartphones already ensures equal access, implying that additional safeguards might be unnecessary [393-398].


Infrastructure was identified as a critical enabler of these ambitions. Kavikrut noted that India’s average 5G mobile data consumption of 22 GB per month provides a robust backbone for nationwide AI deployment [206-208]. Coupled with near-universal smartphone ownership, this connectivity supports both the United Entitlements Interface (UEI) vision-a UPI-like single portal where citizens can check eligibility and claim any constitutional right in one step [112-116]-and the multilingual chatbot and other frontline tools.


In the closing remarks, the panel reaffirmed that AI for Good must be built as a digital public good, embedded within existing systems, and equipped with equity-by-design safeguards [109-116][380-384]. Consensus emerged around three pillars: (1) simplifying entitlement delivery through single-touch AI interfaces, (2) leveraging state-level innovation missions and robust connectivity to level regional disparities, and (3) ensuring inclusive outcomes via equity algorithms and human-in-the-loop designs. Remaining challenges include scaling awareness of welfare schemes among the rural poor, finalising the governance model for the UEI, and establishing clear pathways for converting grassroots innovations into viable startups [326-334][378-382][393-398]. The discussion concluded with a collective call to “build AI for good” and to prevent a “race to the bottom” by prioritising impact, equity and collaboration [418-424].


Session transcriptComplete transcript of the session
Kavikrut

Yes, you would think it will create access, it will democratize access to healthcare. Yes. Both in terms of price, in terms of availability. Yes. That’s great. Kritika, over to you. Tell us about yourself, the organization, and your take on AI for Good.

Kritika Sangani

Sure. Thanks. Thanks, Kavi. Really happy to be here and privileged to share the stage with each of you. I’m Kritika Sangani, and I work as Chief of Staff at Indus Action. I’ve been in the development sector for about 10 years, having started in the corporate sector, serving investment banks in another lifetime. I’m also a Teach for India fellow. So I joined Teach for India and decided to not look back and continue with this sector. And, in fact, I have been associated with Indus Action for almost 10 years now. Indus Action, as Kavi also alluded to, works with governments on making social protection accessible to vulnerable citizens. The end goal is that we ensure that welfare is delivered to them during critical life moments for the household, such that they are able to actually tide over moments of crisis and make the most of moments of opportunity, like education or healthcare, to codify pathways out of poverty for themselves and their families.

The government is the protagonist in our work because, of course, they are the biggest implementers of social protection. We are the enablers. We come in with tech solutions, policy redesign solutions, capacity building solutions as a team. Our model is to embed these solutions within existing systems instead of creating parallel systems, and that is what we are about. We work with about 18 state governments, actively working with national ministries like Labor and the Ministry of Social Justice and Employment. We work with the Department of Health and Human Services. And yeah, very excited to be here. I think when I listened, I heard about the prompt, Kavi, as what AI for good, what does it mean to us really?

It actually is about enabling equitable access for vulnerable citizens to social protection. Right now, they take about 10 steps, 10 burdensome steps to access a single entitlement. How do we bring that down to a single touch process? That is what AI for good stands for us.

Kavikrut

Great. I heard access. You’re talking about simplicity and speed. I’m really curious to find out more themes as we keep talking. Thank you, Kritika. Himanshuji, over to you.

Himanshu AIM

Yeah. Thanks. Thanks, Kavi. And thanks to, I think, the government of India for managing such a humongous event. I know as a matter of fact, because we’ve been part of the planning team, that almost a year’s preparation has gone into it. So kudos to everybody who’s sort of pitching in in their own respective roles. So I’m Himanshu. I come from an organization called the Atal Innovation Mission, which is the federal body that manages innovation for the country. We are housed under NITI Aayog, which is the public policy think tank of the government of India. The Atal Innovation Mission was the brainchild of the current Prime Minister, Honorable Narendra Modi ji, in 2016, when we felt as a country that you need a body for innovation that cuts across all ministries, all life cycles.

So essentially it works from school to startups, and as startups, even the high-tech startups like space. So we’re going to celebrate 10 years next week. The new title that we have sort of decided for ourselves is School to Space, covering almost everything. I’ll not drain you by talking too much about the programs that we do; maybe we’ll talk about it in a bit. I think for me and for the organization as a whole, because we’re also housed within a government institution, social good means that whatever AI is trying to enable, is it having some impact at the grassroots level? Whether we work with incubators like T-Hub, or startups directly, or the state governments. I also lead a program called Setting Up a State Innovation Mission, for example, where the idea itself is: how do we move beyond the current level of innovation that is happening?

How do we bring every part of the ecosystem together and put a layer of AI? Not only AI, I would say all frontier technologies. For example, when you talk to mature states in the southern and the western part, there’s this huge conversation on quantum. There’s huge conversation around even AI for health, for example. There are conversations where: can we enable AI to improve public service delivery, for example, right? We say that we are not building unicorns, not unicorns in terms of valuation, but in terms of social capital, right? Where we say, can it start impacting a billion lives, right? Yeah. So I’ll pause here.

Kavikrut

No, this is great, Himanshuji. Thank you. We heard access. We heard simplicity and speed. You talked about scale and infrastructure because that’s the mandate with which organizations like AIM come from. I’ll talk about one bit on what we see at T -Hub. We have understood this very clearly that as T -Hub, as organizations, even like AIM, the goal is not for startups to be verticals or horizontals. Startups, MSMEs, nonprofits, I think we’re all here as vehicles to implement all this. Of economic growth. If you talk to startups, they don’t talk about policy. they don’t talk about infrastructure, they’re talking about building when they’re talking about building they just want to solve problems, they want to create value and I think AI has become a supercharged tool for startups to do that so the point I was trying to bring out at least from our experience with startups is that at least in the last 15 years, we see it all as a wave but it is probably the strongest tool that startups have ever had access to and a lot of social impact startups, even if you look at the Swiggy’s of the world and Zomato’s and if you take a step back and if you look at what they’re talking about on social media in the last let’s say a few months, both positively and negatively, you will see the major uproar is about gig economy and I was reading an article on The Economist which said that we are one of the only countries or economies in the world where the gig economy has become a true form of employment right so people are talking about minimum wages they are talking about you know labor treatment they are talking about human rights talking about all of that now who would have thought that a food delivery company right food delivery multiple food delivery and grocery delivery companies will actually create an organized labor market in our country so that’s the lens that we take for startups I will hand it back to you I have a question for some of you based on what you said but I will also 
drop in one theme here. Now, when I said AI for good, it obviously means AI for good impact, for economic growth. But it also means AI for good, which is the other way to interpret the English: it's here to stay, it's here forever, and it is our job now to figure out what to do with it. And I'll quote: I think Nandan Nilekani yesterday said something about it being either a race to the top or a race to the bottom with AI.

And I think the only way to go to the top is to focus on impact. So, Kritika, I want to pick this up with you. We know Indus Action: you talked about equitable rights, you talked about social protection, you talked about welfare, access to welfare, and reducing the 10 steps. Tell us a little bit about the work you've done in RTE; I don't think the audience is aware. Talk about admissions. And then tell us, where do you see the biggest opportunity for reducing those steps in that example with AI?

Kritika Sangani

Absolutely. Thanks, Kavi, for setting that up. So, Indus Action started with the Right to Education Act. There is a specific clause under it called Section 12(1)(c). Most of you would have watched the movie Hindi Medium, or English Medium, both versions. It essentially mandates that 25% of seats in private, unaided schools be reserved for children from economically weaker sections and disadvantaged groups. Now, we picked this sliver of a large constitutional mandate, Section 12(1)(c), with a fundamental belief in the power of choice: that parents from vulnerable sections of society should be able to put their children in a school of their choice, not based on whether it's public or private. And with that spirit, which is also part of the letter and spirit of Section 12(1)(c), we started working on this particular right.

So we started our work in Delhi, running on-ground awareness campaigns. Very early we realized something: we were able to mobilize about 19,000 applications in 2013-14 in Delhi, of which only 196 students got a call for admission. And this is a constitutional right; it is their right to get into the school. That was a huge eye-opener for us, and we realized that by working only with citizens and parents we were not going to solve this problem, because the government system also needs streamlining and support. There was willingness to execute, but the government is really constrained for resources, tech capacity, and so on. So we went there, and fortunately the government showed willingness to experiment with a fully online approach. The Rajasthan government had already done it, so some of the peer effect also worked in our favor. That's when we introduced what we call the RTE MIS; that was our first solution. It has now evolved into an education digital public good, an open-source, modular product we've launched for any government to adopt. What it fixed was what parents were going through: this particular act works on a lottery mechanism for selection, so the school has to do the draw of lots, and the parent had to go to the 10 schools they applied to, to see whether their child had been selected in school A, B, C, or D.

We cut that entire physical transaction and got a digital lottery integrated, which is actually our secret sauce. Yeah. And that particular module has now been adopted, in some shape or form, across the 18 states we've worked with.

Kavikrut

How many applications in total and children in schools now?

Kritika Sangani

Children in schools now are about 900,000, 9 lakh children, from 196 ten years ago. And states: we actually made an exit from our first state, which is Delhi. And then Uttarakhand adopted our end-to-end system. We took about seven to 10 years; now we've shortened that entire cycle to three years.

Kavikrut

And tell me, Kritika, what is AI giving you as a tool to simplify and scale what you just described?

Kritika Sangani

I think we've now moved it to the next level by focusing on who these children applying to this right actually are. Are they the most vulnerable? So it's a challenge of discovery, and we are using AI for improved targeting for the state.

Kavikrut

And how are you using AI in targeting?

Kritika Sangani

So for AI in targeting, what we are currently experimenting with is a WhatsApp chatbot. It's a multilingual chatbot which serves as a first interface for the students or the parents to apply through. We are also building frontline worker capacity. What that does is reduce the load on the overburdened frontline worker with respect to reaching out to these students, and it helps target the most vulnerable: having that multilingual advantage, and a physical touch point so that if I'm confused, I can reach out to somebody to navigate the system and then apply.

Kavikrut

So the path from 196 to 9 lakh took about 10 years, but I'm understanding that from 900,000 to 9 million will be much shorter. It wouldn't be 10 years.

Kritika Sangani

Absolutely. So there are 20 lakh seats available under this particular right.

Kavikrut

Every year?

Kritika Sangani

Every year.

Kavikrut

Annually 20 lakh children?

Kritika Sangani

Annually there are 20 lakh children. Currently there is about 60% coverage; when we started, it was at the 30% mark. So we definitely…

Kavikrut

So AI will scale that. And I think you didn't talk about this, but I know from our other conversations that while AI will help you drive this deeper and scale it, you are also horizontally applying the DPG, the digital public good you have built in education, to other entitlements and constitutional rights. So we heard the example of education. Absolutely. The dream, I know the project name is UEI: instead of the Unified Payments Interface, Indus Action wants to build a Unified Entitlements Interface. Think of DigiLocker meets DigiYatra meets the constitution. You log in, you check your own eligibility, and you can tap into a constitutional right, apply for it, and actually get access to something that is already rightfully yours.

Absolutely. This is phenomenal. Thank you so much for that, Kritika. Himanshu ji, we'll take a step back. I love depth, especially when people who are in action talk about it. Tell me, pick a specific example: where are you seeing AI truly unfold real impact in the work that you do? I have the same question for Rajesh Babu ji after this.

Himanshu AIM

Okay. I'll probably step maybe two steps even further back. When the cabinet approved our extension in 2024, one of the programs that was really high on priority was setting up state innovation missions. We did a couple of workshops, and we realized that there's a huge disparity between the western and southern parts of the country at one end of the spectrum and the northeast, eastern, and northern parts at the other. Telangana, Andhra, Karnataka, Maharashtra are already talking at a level closer to the evolved countries, let's say, in terms of technology enablement and even the understanding of how it can be leveraged by a startup.

And when you take a diametrically opposite view of the startups in the northeast and the eastern part of the country, they are really, really not even at the basic level. All of this is new to those states. So one of the first few state innovation missions that we plan to set up is in the northeast and the eastern part of the country. The first launch will be next week. I don't want to take the thunder away. I know what's coming up; we're talking to them about it. You're launching a state innovation mission next week. I won't name the state. Yes, correct. And some of the conversations that we're having with the state, and it's a very progressive, evolved state, though without too much in terms of manpower or resources, for the simple reason that all the good people migrate to the Hyderabads and Bangalores of the country and don't remain in that state.

But those who have remained back are actually building something really incredible for the state itself; they're trying to solve some grassroots problems. I'll talk about something that is more relevant. This is a conversation that happened about two months back, when almost everything was set, and the idea was: how do we launch a ministry-backed hackathon? Now, the water in this state, let me not name the state, has a high content of iron. The idea was: can we convert this into a hackathon and say, leverage AI to first identify the iron content of the water in different parts of each district? So we're not talking at the state level; we're going one level below to the district, and even one level below the district to the sub-district level.

Identify the different levels of iron, or the different types of iron, and then build a hackathon where a startup can leverage the data that has been collected and build solutions two ways. One, a low-cost diagnostic: can we build a 100-rupee kit that can diagnose this water? And second, maybe a low-cost solution to solve the problem itself. Another conversation then started. The state has a lot of bamboo. In fact, if any of you have been to the newer terminal at Bangalore airport and seen all those beautiful bamboo installations, it came from this state. The quality of the bamboo is better than China's and Vietnam's, but they get only about one-tenth or one-twentieth of the price.

One, because they don't have access to the market. Two, they are not able to identify what bamboo will sell where, and at what price. Can we link it to the global market and identify who needs what? So they are trying to create a dashboard for it. And that thought came because they realized people have appreciated the bamboo that went into Terminal 2 of Bangalore airport; there could be a lot of use cases. Third element, and this was fascinating, and this is something not to be proud of in government: we have a lot of data. Everybody knows we have a lot of data in the government.

We are just trying to build a small funnel: what's the total number of innovations that convert to a startup and create jobs? Now, from startup to jobs is slightly easier; maybe multiply by 10 or 15, or you have the data for the 100 to 200 DPIIT-registered startups in the state. The first thought that came to my mind while I was talking to the secretary there: he said, let's look at the innovations happening in the state. Now, imagine: the state has 120 or 130 startups, but it has 1,100 documented grassroots innovations. And these are validated innovations, because the overall number of innovations is 3,000, which, when we took it per capita, is better than a lot of states, better than even Karnataka and Maharashtra.

I come from a consulting background, so the first thing I did was divide it by the population and look at a per capita number. It's twice the number for Maharashtra. And they are trying to solve problems which are very basic. For example, they are saying that if we mix this plant and this plant, that can lower your blood sugar. I don't know; some of them may be patentable, some of them may not be. But the idea is that even if 10% are good enough, 10% of that 1,000 is about 100, which would nearly double the number of startups. The innovation already exists; somebody has to commercialize it.

And this number sits on the National Innovation Foundation dashboard, and the state government had no clue about it. That, oh, there are these many innovations. And then…

Kavikrut

And a lot of this you can now supercharge with AI. Yes. Which was slower before. And now we have multiple, let me just say, infrastructure layers, right? You have everything from 5G smartphone usage to the largest user base for ChatGPT. I don't know if you know this number, but 22 GB is the average monthly usage of 5G mobile data in India, 22 GB per month.

Himanshu AIM

Correct. And the other thing we are talking to this state about is how we can enable AI to improve public service delivery, as I mentioned. There are some broad use cases. Simple cameras on traffic lights that monitor the flow of traffic each hour or each minute, linked to automation that reduces the consumption of petrol or diesel, shows some savings, and fairly earns carbon credits: one very simple use case. Can we utilize, for example, satellite imagery to identify how lakes are drying up, or what the water levels are? Just basic imagery. Everything is available; it's not that we have to create anything new.

And this is available with the government. One of the startups we have funded at the Atal Innovation Mission has created small sensors embedded in the pipeline, measuring the flow of water at each level, maybe every 10, 20, 30 feet, to tell you where exactly the leakage is. So people don't have to walk along the pipe to see where it is going; you go directly to the spot and fix it. The next level the startup is building: can we also look at where leakage has happened multiple times, and at the material, whether there is more stress at that particular point of the pipeline, to ensure that the next time you build or repair it, the leakage doesn't come from that part of the pipeline.

Kavikrut

No, this is great, Himanshu ji. What I'm hearing from you, first of all, is a sense of energy and excitement for the region, for the upcoming state innovation mission launch, as well as the beautiful vest you're wearing today. No, this is not the state. Yeah, we're not giving any hints. And I was just going to say that what you're describing, like the pipe example you talked about, is that there are existing innovations, there is existing latent energy that startups have, there are real problems that can be solved, and AI is helping, as in that example, tie all of this together. It's not just supercharging or fueling this; it's the glue bringing it all together.

And the best part is that, in the truest sense, this is democratization, because AI will enable Assam or Manipur or Meghalaya or Sikkim to be at the same level as, say, Karnataka. If you have that AI layer on the data, and you want to take a data-backed decision for public good, nothing stops Sikkim from taking the decision Karnataka can take today.

Himanshu AIM

And the other very important thing we are trying to build within these state innovation missions is a peer-to-peer learning network. There could be something fascinating Sikkim can offer in terms of organic agriculture which can be picked up by the other agrarian states. Yeah.

Kavikrut

Yeah. This is great. We'll go over to Rajesh Babu ji, and then we'll break for questions. You know, we've heard the building-in-public perspective from a nonprofit as well as a federal organization. I want to see the private slash corporate perspective, and the work you do in the foundation, Rajesh Babu ji. You already spoke about access; you spoke about availability in healthcare. Pick an example you're already building towards and tell us what you're most excited about, about how the products and services you are building will truly unlock value for social good and economic growth.

Rajesh Babu

Always, yes. So I'll first touch upon one point. See, there is a lot of concern and fear about AI coming and taking away jobs, right? I think that's partially true, but in the big picture it is not true. See, this is what happens every time technology comes. For example, when humanity started as a civilization, there was only one industry, which was agriculture. Yeah, pre-industrialization. Exactly. And then, after industrialization happened, more opened up. The move from horse power to steam power could have been seen as very disruptive to the many people whose livelihood depended on horse power, or bull power, or whatever power it was. The same way, when steam moved to electricity, it could have been very disruptive to people whose business was all in steam and coal and all that; they would have thought, oh, this is not necessarily a good thing, it's going to put all of us out. And even yesterday somebody was referring to the late 1800s, when they said they would close the patent office, because everything that needs to be invented has already been invented and there is not much left to do. But here we are, almost 130 to 140 years later, still filing patents, and new things are unfolding.

So I think, you know, like many leaders in the forum spoke about, AI is like another energy, right, like electrification. It's going to bring a big transformative change, and there is not a thing it won't touch. It will touch everything, and it's going to make everything intelligent. And when everything becomes intelligent, largely very good, very positive things will happen. It's going to enhance the livelihood and quality of life for many people, and it's going to create a lot more opportunities. With that being said, in the industry we are focused on, I will tell you one example which we tried at Agilisium. So we work with pharma companies, biotech companies.

I'll give a few of the problems we were trying to solve, which were not solvable at that time because of the technology, as recently as four or five years ago.

Kavikrut

Tell us more about this example

Rajesh Babu

For example, in 2018 we invested in a project. Our customers were pharma companies; it was for the reps. The reps go and meet the doctors. They are supposed to know which doctor they are meeting and what the conversation with the rep was the last time. The doctor may have said, I wanted to know about this medicine, what kind of adverse impact it could have for patients, what side effects it may have if I give it to patients, some technical details the doctor would have requested. So they have to go back, maybe talk about it, give those details, and all that. But between two appointments it is sometimes three to six months, so they don't remember what happened. So what we wanted was the last conversation recorded. Of course they may have it on paper or in Salesforce, but going into today's meeting you don't remember the previous one; it's not accessible; you are in the car driving, and all that. So what we wanted to do was build an AI, implemented in a phone app. The previous day, it looks at your calendar, it looks at which doctors you are visiting, and then it goes to the CRM for the last two or three conversations that happened with each doctor, and whatever homework you did on that. It takes all that and turns it into a voice memo. In the morning at 7 a.m., when you are going to meet the doctors, you play that voice memo, and it says, hey, today you are meeting such-and-such doctors, this is what the last three conversations were, this is what you are supposed to tell them. It's

Kavikrut

almost like a morning presidential briefing.

Rajesh Babu

Exactly, a morning presidential briefing. And I got the idea from exactly that. Amazing. At that time, the technology available to me was Lex. Lex is a tool AWS had released from Alexa: whatever they used to build Alexa, they gave as a tool. And there was another technology called Polly, also from AWS. So we took these two technologies from the AWS platform and tried to build this for our customers. It sucked. At that time the experience was really bad, because it could not understand the keywords, the medical keywords; it would not understand people's accents; and reading structured and unstructured data, because you have to read both, was not so great. Clunky, no? So we had to shelve that project.

We invested almost a million and a half, two million dollars at that time, across multiple customers of ours. And when we took it to the reps, the experience was not so great. If it's not easy and usable, nobody adopts. It would not understand the question: can you repeat it again? Repeat it again. Three times, and they would throw the phone, right? It was like that. And even when it got the words, it did not understand, because of the accents and the medical and technical terms. It was bad. Now, a year back, this has been implemented with the AI, and it is super seamless.

Kavikrut

And where do you see the impact of adoption of this technology?

Rajesh Babu

Across the board, everywhere. Because if you think about it now, it can work for every individual. Let's say we take our calendar: we can create a personal agent for it, and it will look at our tasks, it will look at our email, it will look at our calendar, and it will tell us, this is what you are supposed to do.

Kavikrut

I think a lot of people have begun to use that.

Rajesh Babu

Yes, and now, not only for yourself: an organization can create an agent for each and every person. Instead of a manager going and telling each person, hey, you are supposed to complete this, their personal agent, their agent buddy, can tell them, which is a little more private and more comfortable to hear.

Kavikrut

And I think in healthcare and the service that you are in, a continuous flow of high-quality information makes a huge difference in the availability of the work that you do.

Rajesh Babu

Exactly. It is as simple as accessibility of information, like you said. And scale. But then I will take another, complex situation, where we are working with another research institute in San Diego.

Kavikrut

Yeah.

Rajesh Babu

Where basically, this is in liver transplants. The patient who is on the waiting list has been waiting for a long time; a donor comes in, and they would be matched. Now what we are doing is different, because in a liver transplant, in any organ transplant, acceptance of the organ is very difficult, because it is seen as a foreign body.

Kavikrut

Yes, the foreign body

Rajesh Babu

And the body will try to reject it; making it accept is a very, very big problem. So then you need to look at the biologics of both patients, to see who is most conducive, from a biological point of view, to receive this organ. There are biological parameters and physiological parameters that you can now match on, and there are too many parameters to do this with a simple old algorithm; it is very difficult. Now what we do, and there are multiple research papers…

Kavikrut

How far along are you on this matching for organs?

Rajesh Babu

Now we are helping an institute, one of the top institutes in San Diego, Scripps, where a lot of medical Nobel laureates are. We are helping one of the researchers, actually from India; it's published, so I can quote him. His name is Sunil Kurian. He has published and implemented this with AI. Now, based on the donor and the patient, the AI can tell who the best recipient for this match is, and whether, with that organ, they will have a better living condition afterwards. It can predict based on various parameters, which is not easy to do. So from a simple use case…

Kavikrut

No, it's phenomenal, to very deep science. You went from morning presidential briefings to almost Tinder for organs in one shot. No, this is great. I want to ensure, Rajesh ji, that we have enough time for questions. We have a small audience here. I wish we could take live questions online too, but we don't have that facility yet. Any questions here? Very happy to take them. You can point one to a certain panelist if you like. Yes, please, we will pass the microphone; right behind you. Please tell us your name, and you can point a question to any of us. Thank you very much.

Audience Member

I have two questions, one for Rajesh ji and one for Kritika ji. Rajesh ji, what medical breakthrough do you believe will come to the market in the next three to four years after this integration of medical science and AI? A crystal-ball question on healthcare: what is the breakthrough on the medical side that might happen with AI that you are excited about? Second question, for Kritika: your initiative is very impressive. What I want to know is, is the average poor person somewhere in a rural area aware of your initiatives? If not, in what time frame do you believe you will be able to reach that awareness?

Kavikrut

I will add a flavor to that question: can that awareness be unlocked further with AI? We will first start with Rajesh Babu ji. Please go ahead.

Rajesh Babu

Thank you, very good question, appreciated. See, I think definitely lots, lots, right? It is going to be very transformative on the healthcare side. First, healthcare: I think a lot of primary healthcare will move to AI. Basically, doctors will start creating doctor agents, and first your personal doctor agent will address your questions, the basics, and then, based on that, your doctor's agent will next contact the doctor if there is a need.

Kavikrut

I have a feeling that the doctor’s agent will talk to your patient’s agent

Rajesh Babu

Yes, yes

Kavikrut

Before it comes to the two humans

Rajesh Babu

Exactly

Kavikrut

So what is the breakthrough for you in that?

Rajesh Babu

See, basically the access: no waiting. Especially, you know, in the western countries, the waiting for some of these specialists is unbelievable; we would not have heard of it in India. They wait for six months to one year.

Kavikrut

Right, that’s the waiting time for surgeries too.

Rajesh Babu

Yes, so I think that will definitely transform. And you also talked about one thing; sir talked about the bamboo situation. Why not global healthcare? Why can't there be a marketplace where they can immediately reach a doctor? Of course that is already happening, but it can happen at a much, much greater level, because these AI agents could be sitting at the hospital level, the doctor level, the patient level. As a patient, I may not be AI-savvy enough to reach various things, but my AI agent avatar, which I have created, if we can create it in some easy way, would reach them and definitely address it.

Kavikrut

No, I think you brought up a very important point, and we will go to Kritika right after: in highly skilled professions, where human capacity is at a very significant scarcity, that capacity can be unlocked to a further level simply because agentic AI will come in. No, that's great. I hope, Yuvraj ji, the first question is answered. I'll just add on to what you are saying: we were doing a joint program with Swissnex, which is the Swiss arm of the Embassy of Switzerland here, an exchange kind of program, and one of the Indian startups that actually spent a week in Switzerland is trying to look at the profiles of cancer patients, map them to their DNA, and predict probabilities. So they have created a test, it seems, still at a validation stage, of the probability of you getting cancer, looking at all the data patterns they have in their database. And they feel this will enable a much faster cure for cancer, because you'll be able to predict it much better.

Yeah. Kritika, over to you. I'll expand Yuvraj ji's question; I'll recap. He's asked something very interesting and important: in a country of a billion-plus people, despite the availability of technology, awareness of important, interesting, impactful things continues to be a challenge. Given your experience at Indus Action, do you feel there is an awareness problem, based on the work you've done with now 9 lakh students, and can that awareness challenge be leapfrogged using AI? Over to you.

Kritika Sangani

Absolutely. Absolutely. No, this is a really relevant question and something we are very actively working on. So what we're saying is, we want to flip this question. Right now the citizen has to discover the scheme. Can we use AI and tech to flip this and say, can the state discover the citizen? Building those layers of AI and ML into existing data, which is exhaustive and is with the state. To say: what if I layer your MGNREGA (erstwhile NREGA) data with the PDS repository and, say, aspirational district data, and say that in this district almost 95% of citizens are eligible for this particular entitlement? And this currently happens for demographic longitudinal research.

Correct. Absolutely. It is not used for actual access. And there is actually an organization doing something similar: Educate Girls has a model wherein they have used ML to identify and pinpoint the districts which need an intervention — the districts with the most out-of-school girls. So awareness flips into actual targeted outreach. That is exactly what we are now trying to attempt with the state, including the problems of validating and verifying whether citizens are eligible or not. Can we use AI and ML? Yes — tech to flip this and let the state discover the citizen.

And that will also reduce the 10 steps to one, in a single shot.

Kavikrut

What an answer. Love this. I think we can take one more question; we have a few minutes left. We'll ask you to keep your question short and pointed to whoever you want to ask. Please go ahead.

Audience Member 2

Thank you for a valuable session. I just wanted to ask: there are a lot of startups in India, but in which sector do you think there should be more startups that aren't there currently?

Kavikrut

Great question. Do you want to answer it? You and I can pick this up; you go first and I'll follow. Which sector, loosely defined, do you think has a huge volume opportunity for founders?

Himanshu AIM

Okay. I think it works at two levels. One is if I define sector by what we traditionally call a sector — say edtech or health tech; the second would be whether it is frontier AI or non-AI. If you look at the current sectors — and this is my personal belief, not the organizational one — what most startups are doing, or were doing, is chasing venture capital money. If EdTech felt hot today, let me go into EdTech. Today the same thing is happening with deep tech: just enable some element of AI or deep tech and say, I am a deep tech startup.

The point is not that. So one, founders should look at some problem to solve — I'm not saying a social problem, just some problem, some gap they feel the current ecosystem or the current startups are not fulfilling. Or create your own niche. That's one. Second, what some of our programs also do, and I talked briefly about it, is the social side: do we create solutions that solve problems in the Indian context? Because a lot of the problems we solve for in India are replicable across the Global South — similar per capita GDP, similar environment, similar diversity, similar price points.

Kavikrut

That is a very interesting perspective, and it's a fantastic question. I'll answer it with one simple perspective: foundership is a long journey. Even if you look at the current IPOs, they had all been building for 10-15 years. AI will not shorten that journey, but it will make impact faster, I believe. So given the world we are in right now, and this massive, phenomenal super-tool we are getting, I think founders should build things that take time, because those things can now be accelerated — these are essential problems. If I have to pick sectors, I would say healthcare and education, and not just because we have them represented here. If even 10 percent of the talent currently in fintech or consumer tech — engineers, founders, investors — moved to healthcare, it would fundamentally change what this country can do and needs. It's the same for education. I don't want to say unemployment, but we have a massive young talent pool, the largest in the world. If, as founders and startups, we can build more tools for education, I think we will unlock a superpower for our country. So that's where founders should go. Thank you. One last question — you had one to ask us.

Yashi (Audience Member 3)

Hello, I'm Yashi. I wanted to ask one question; anybody from the panel can answer. My question is: how can we ensure that AI systems deployed for public welfare do not deepen digital divides, especially for rural and marginalized communities?

Kavikrut

What an amazing question. AI can deepen divides — and I'll take the liberty of not just saying digital: it could be economic, social, cultural. Many divides exist in this country. Kritika, do you want to take a shot at it, given that you're in the welfare space? Then maybe all of us can give a quick one-line, 10-second rapid-fire answer on what we can do proactively to reduce a potential digital divide due to AI.

Kritika Sangani

Yeah, I think there are two perspectives. One is in our solutioning — and I'm going to take a live example. When we embed, say, a lottery algorithm within a state, we also add an equity algorithm to ensure the gender ratios are balanced: girls and boys have a 50-50 application rate. Or to set the percentage of, say, children with special needs, or children from socioeconomically weaker backgrounds, like SESD, OBC. So one perspective is how we proactively embed these algorithms to address exactly what you've suggested, which is also a problem we actively work on.

The other is that I definitely feel we cannot discard the human in the loop. AI has to make their jobs easier — my Anganwadi worker or my Asha worker. They need access to information, just the way Rajbabuji was referring, that is so simple and so easy to disseminate that they don't have to spend hours parsing the complex eligibility criteria for welfare schemes. Those are the two perspectives that I have.

Kavikrut

Fantastic. You are basically saying: build a pro-social equity bias into AI.

Kritika Sangani

Absolutely.

Kavikrut

Rajbabuji, do you have a quick rapid fire answer to this?

Rajesh Babu

Thank you. I also want to answer the previous question, because that was a very good one; I will quickly touch on that and then come to this. Like Kavi said, don't chase the VC money. Monetization should always follow the value. What you should be focusing on is: are you creating value? What is happening is that many are chasing the monetization and missing the value. Are you making a difference? If you are making a difference — wherever it is, however small it is — like he said, in health tech especially there is a lot of potential, and we need to be making disruption there. So focus on that.

On the digital divide and social equity: with AI on phones, almost everybody has a smartphone already, or soon will. Once they have a smartphone, they have AI. So AI is not going to divide; if anything, it's flattening everybody out. For example, people with computer engineering degrees — programmers — were on a pedestal. Now AI has brought that capability down to everyone, not only the programming person. If anything, AI is the biggest equalizer; it is not a divider.

Kavikrut

Fantastic Himanshuji, any view on that? What can we do proactively?

Himanshu AIM

Yeah, I sort of agree with what everybody has said. With AI and the smartphone — and we are one of the largest consumers of the Internet — the rural-urban divide has been narrowed to some extent. But there's one more divide we generally don't talk about: the language divide. There are 22 scheduled languages; no other country in the world has 22 scheduled languages. And that's just the scheduled languages — in a state like UP it looks like Hindi, but as you travel from the western part to the eastern part, the dialect changes. That's another divide that is being bridged by AI, and everybody's building solutions around it.

All the LLM models being developed — both by the government and by some of the larger companies that want to work with our data — are part of this. In the future, one thing we need to be very cognizant about, which is what I started with, is the divide between the western and southern states at one end of the spectrum and the eastern and northern states at the other. The challenge is that a lot of mentorship and VC money has not reached the latter, so they are still trying to build those native solutions — which is not bad — but for them to really equalize with some of the other states, that's where AI is going to enable them.

And some of the state governments are building and all of us are playing our own little role in that.

Kavikrut

This is great — you brought it all together, Himanshuji. We'll wrap this panel now. I think AI is here for good, and it's our duty to build AI for good. What an interesting conversation: the divide is a matter of what we do. It's great to have all your perspectives. Thank you, everyone, for joining us and for this conversation. This is the photo of the marathon — that's why they take it at the start. You can check it out there. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (37)
Factual Notes — Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Moderator Kavikrut stated that artificial intelligence (AI) will democratise access to healthcare by lowering both price and availability barriers.”

The knowledge base notes that AI is helping to democratise access to healthcare through early disease detection and wider availability of high-quality diagnostics [S16] and that leaders highlight AI’s role in making healthcare more accessible globally [S101].

Confirmed (high)

“Kavikrut warned that AI could trigger either a “race to the top” or a “race to the bottom”, emphasizing the need to focus on impact to achieve the former.”

Sources discuss the concept of a “race to the bottom” in AI regulation and warn that it is progressing faster than a race to the top, underscoring the urgency to steer AI diffusion responsibly [S106] and note the dangers of a regulatory race to the bottom [S94].

Confirmed (medium)

“Kavikrut urged startup founders to channel their talent into high‑impact domains such as healthcare and education rather than chasing venture‑capital trends.”

The moderator explicitly encouraged founders to focus on high-impact sectors like health and education in his remarks [S1].

External Sources (118)
S1
Science AI & Innovation_ India–Japan Collaboration Showcase — -Kritika Sangani: Chief of Staff at Indus Action, works in the development sector for 10 years, former Teach for India f…
S2
https://dig.watch/event/india-ai-impact-summit-2026/science-ai-innovation_-india-japan-collaboration-showcase — Absolutely. Absolutely. Absolutely. No, this is a really relevant question and something that we are very actively work…
S4
Keynote-Rajesh Subramanian — -Frederick W. Smith: Role/Title: Founder of FedEx; Area of expertise: Not specified in current context (referenced by Ra…
S6
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S7
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S8
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
WS #150 Language and inclusion – multilingual names — – Fouad Bajwa: Audience member asking question (role/expertise not specified) – Jamal Shaheen: Audience member asking q…
S11
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S12
Science AI & Innovation_ India–Japan Collaboration Showcase — -Kavikrut: Moderator/Host of the panel discussion, appears to be associated with T-Hub (startup incubator/accelerator)
S13
Relations between Cyprus and Germany (1960 – 1968) — planed even to publish a biography of Makarios under the title ‘The Christ in the
S15
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S16
AI: The Great Equaliser? — Artificial intelligence (AI) has the potential to revolutionise various aspects of global society. It can democratise he…
S17
Setting the Rules_ Global AI Standards for Growth and Governance — Comparison across models enables race to the top rather than bottom in safety and quality
S18
Driving Indias AI Future Growth Innovation and Impact — Absolutely. Thank you for that wonderful perspective, Professor. Coming to you, Manish. Now, the Dell Technologies bluep…
S19
Building Indias Digital and Industrial Future with AI — So I think we are in a very good place. We have got very robust infrastructure. And how do we now navigate this world of…
S20
Employing AI for consumer grievance redressal mechanisms in e-commerce (CUTS) — A prime example of this is the United Payments Interface (UPI), which has revolutionized the Indian digital payments lan…
S21
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Startup nation statuswith entrepreneurial ecosystems across all sectors enables rapid innovation.
S22
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — But our premise or hypothesis is that if you make the extra effort in identifying the true AI startups, and what do I me…
S23
HIGH LEVEL LEADERS SESSION IV — Audience :Thank you very much, Joelle. Good afternoon, everyone. I bring a perspective from the Caribbean, as pointed ou…
S24
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Addressing biases in AI systems is another important aspect for Google. Emma Higham acknowledges that AI algorithms can …
S25
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Victor Asila: Right. Thank you. So there are numerous opportunities. Victor, maybe before you start, I would like yo…
S26
Artificial intelligence (AI) – UN Security Council — Another significant risk is the potential for bias in AI algorithms, which can reflect existing prejudices and stereotyp…
S27
AI for social good: the new face of technosolutionism — Abeba Birhane presents a critical analysis of AI systems and their impact on society, arguing that current AI technologi…
S28
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Artificial intelligence | Social and economic development
S29
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission Atal Innovation Mission’s Decade of Impact Thank…
S30
AI for Social Empowerment_ Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S31
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — Depending on the stage of the startup, VC may not be the best answer He believes that the private sector, particularly …
S33
Democratizing AI: Open foundations and shared resources for global impact — ## International Collaboration Examples ## Practical Applications and Real-World Impact **Climate and Agriculture**: A…
S34
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Owen Larter from Google DeepMind provided an industry perspective on the technical requirements for robust AI assurance,…
S35
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Finally, the analysis highlights the need for academics to propose alternatives to address biases in the digital medium….
S36
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Inclusivity is another key aspect of AI governance. It is crucial to have more inclusive conversations and ensure the pa…
S37
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S38
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The disagreement level is moderate but significant for policy implications. While there’s consensus on the core challeng…
S39
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Furthermore, the concentration of data collection and usage among a few global entities has led to a data divide. Many d…
S40
Artificial intelligence (AI) – UN Security Council — Another significant risk is the potential for bias in AI algorithms, which can reflect existing prejudices and stereotyp…
S41
Pre 3: Exploring Frontier technologies for harnessing digital public good and advancing Digital Inclusion — AI systems reflect the quality and inclusiveness of their underlying data and decision-making processes. Currently, both…
S42
WS #110 AI Innovation Responsible Development Ethical Imperatives — Gong addresses the need for inclusive development policies that ensure technology access for developing nations and prev…
S43
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — This set the moral and strategic foundation for the entire session, establishing that the conversation wasn’t just about…
S44
Building Inclusive Societies with AI — Leveraging the Startup Ecosystem for Social Impact Romal emphasizes the need for deeper industry‑government collaborati…
S45
IndoGerman AI Collaboration Driving Economic Development and Soc — Another example in our strategic agenda in the future of AI is that we set up an AI innovation lab at Hessian AI, co -fu…
S46
AI for Social Empowerment_ Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S47
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S48
Building fair markets in the algorithmic age (The Dialogue) — However, without proper governance, algorithms can have harmful effects. It is crucial to have the appropriate oversight…
S49
AI: The Great Equaliser? — Artificial intelligence (AI) has the potential to revolutionise various aspects of global society. It can democratise he…
S50
AI/Gen AI for the Global Goals — Priscilla Boa-Gue argues for the creation of supportive policy environments to foster AI startups. This includes develop…
S51
Driving Indias AI Future Growth Innovation and Impact — But there was also a lot of fear around AI about trust factors, about privacy, data, sovereignty, multiple issues about …
S52
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Hemant Taneja General Catalyst — “Whether it’s… scaling of the energy infrastructure so we can deploy AI, all those capabilities to present enormous op…
S53
Science AI & Innovation_ India–Japan Collaboration Showcase — Kritika highlights that current welfare schemes require many cumbersome steps and proposes reducing the process to a sin…
S54
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Authorities and independent media will lag behind while malicious actors remain behind. one step ahead. Accountability w…
S55
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — This comment exposed a critical gap in the open source AI narrative – that true democratization requires not just access…
S56
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — This observation is particularly insightful because it reveals how current AI development exploits commons-based resourc…
S57
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The disagreement level is moderate but significant for policy implications. While there’s consensus on the core challeng…
S58
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner highlighted that connectivity challenges extend beyond infrastructure availability – many regions have technical …
S59
AI as critical infrastructure for continuity in public services — Many participants are unfamiliar with existing AI standards, creating both awareness and capacity challenges. Articulati…
S60
Regional Leaders Discuss AI-Ready Digital Infrastructure — And in there, you can see, for example, that some of the lower income economies can seem quite open in that space. But i…
S61
Press Conference: Closing the AI Access Gap — Data strategies are another critical aspect in the AI era. Countries need robust data strategies that include sharing fr…
S62
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S63
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — It is also highlighted that biases and discrimination in AI algorithms pose a significant challenge. The analysis acknow…
S64
Science AI & Innovation_ India–Japan Collaboration Showcase — Kritika Sangani discussed Indus Action’s work in making social protection accessible to vulnerable citizens, particularl…
S65
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Building confidence and security in the use of ICTs | Artificial intelligence
S66
Extreme poverty and human rights * — 88 European Commission, ‘Human capital: digital inclusion and skills’, 2019. 47. The United Kingdom provides an examp…
S67
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission Atal Innovation Mission’s Decade of Impact Thank…
S68
Cooperation for a Green Digital Future | IGF 2023 — Yawri Carr:Today I want to share with you the transformative role that artificial intelligence can play in shaping a sus…
S69
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — Depending on the stage of the startup, VC may not be the best answer He believes that the private sector, particularly …
S70
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — Capital investment in cybersecurity startups is necessary for their growth and expansion. However, investment capital is…
S73
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Marco Zennaro provided concrete examples of TinyML applications that address real-world challenges across diverse sector…
S74
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Owen Larter from Google DeepMind provided an industry perspective on the technical requirements for robust AI assurance,…
S75
Panel Discussion Data Sovereignty India AI Impact Summit — The panelists shared concrete examples of sovereignty implementation. Gupta’s Bhashini migration demonstrated how critic…
S76
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — Menno Ettema: Great. It always takes a moment before the screen comes up. Yes, to open the eyes, we want to launch a lit…
S77
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Inclusivity is another key aspect of AI governance. It is crucial to have more inclusive conversations and ensure the pa…
S78
Artificial intelligence (AI) – UN Security Council — Another significant risk is the potential for bias in AI algorithms, which can reflect existing prejudices and stereotyp…
S79
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Ananda Gautam: that build capacity of developers and design makers to understand the risks of algorithms, bias, and a…
S80
LinkedIN and UN Women India project: bridging the digital divide for equal opportunities — In February 2023, UN Women India met with Dan Shapero, Global COO of LinkedIn, in Mumbaito address the significance of d…
S81
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S82
Discussion Report: AI Implementation and Global Accessibility — The tone was consistently optimistic and collaborative throughout the conversation. Both speakers maintained a construct…
S83
Elevating AI skills for all — The tone is consistently optimistic, enthusiastic, and collaborative throughout. The speaker maintains an upbeat, missio…
S84
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S85
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S86
Safe and responsible AI — – The start of the transformation of education according to the prepared proposal and the Education Policy Strategy aft…
S87
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Auidence:I think maybe it’s easier if we all ask the question then any panel member can just catch on it. In four minute…
S88
1 Introduction — Source: Complex analysis of barriers of applied and oriented research, experimental development and innovation in the Cz…
S89
The AI Pareto Paradox: More computing power – diminishing AI impact?  — Capturing tacit and hidden institutional knowledge that isn’t in any manual or policy papers Meticulous data annotation…
S90
Main Topic 3 –  Identification of AI generated content — Aldan Creo:Great. Hello. How are you, everyone? Well, it’s a pleasure to be able to have this session. I hope we’ll make…
S91
Digital democracy and future realities | IGF 2023 WS #476 — Rachel Judistari:Well, it’s kind of very interesting questions, but there are some risks that can be affecting public in…
S92
Multistakeholder digital governance beyond 2025 — The discussion maintained a constructive and collaborative tone throughout, with speakers sharing both challenges and su…
S93
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S94
Overcoming the fragmentation of the digital governance: what role for the Global Digital Compact and e-trade rules? (South Centre) — The concept of a “race to the bottom” in regulations is viewed as dangerous. Currently, there is a lack of regulations i…
S95
From summer disillusionment to autumn clarity: Ten lessons for AI — As we refocus on existing risks, some accountability is due:how and why did respected voices get carried away with AGI p…
S96
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S97
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S98
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S99
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — The discussion maintained an optimistic and collaborative tone throughout, with speakers consistently emphasizing human …
S100
DC-CIV &amp; DC-NN: From Internet Openness to AI Openness — Sandrine Elmi Hersi: Thank you. First of all, let me thank the organizers of this session for this important conversa…
S101
Technology in the World / Davos 2025 — – Marc Benioff- Mark Rutte Ruth Porat highlights how AI is currently enhancing healthcare by enabling early disease det…
S102
Democratizing AI Building Trustworthy Systems for Everyone — -Justin Carsten- Moderator/Host of the panel discussion The session was moderated by Justin Carsten, who opened by noti…
S103
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The moderator opens, transitions, and closes the session, guaranteeing that speakers are introduced, the discussion proc…
S104
Part 5: Rethinking legal governance in the metaverse — AI is rapidly becoming entrenched in sectors such as healthcare, finance, and media, making it difficult to reverse or m…
S105
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Merve Hickok:First of all, thank you so much for the invitation, Chair Schneider. Good to see you virtually. And I appre…
S106
Fireside Conversation: 01 — And the race to the bottom is faster than the race to the top. So I think all of us who have a stake in AI being useful …
S107
Keynote-Bejul Somaia — The playing field is meaningly more level than it has ever been. Now this mind shift is not trivial. Scarcity thinking i…
S108
https://dig.watch/event/india-ai-impact-summit-2026/scaling-enterprise-grade-responsible-ai-across-the-global-south — I would start off with processing capacity. That’s the underpinning for building these systems in -house and running inf…
S109
https://dig.watch/event/india-ai-impact-summit-2026/keynote-jeet-adani — She rises to stabilize, she rises to anchor a world searching for balance and she rises to build systems that are inclus…
S110
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — Well, it’s difficult to choose only one thing, I guess. Maybe this perspective from management, you’re always looking fo…
S111
Not Losing Sight of Soft Power — Paetongtarn Shinawatra: Yes. So I noticed that I think all Thai people realise that we have a very rich culture and we…
S112
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — 3. **Collaborative Approach**: The speaker advocates for a collaborative model involving private sector entities, civil …
S113
The Role of Government and Innovators in Citizen-Centric AI — This comment cuts to the heart of digital transformation failures – the tendency to digitize existing processes rather t…
S114
Adoption of the agenda and organization of work — Canada’s efforts align with several Sustainable Development Goals (SDGs), particularly championing peace, justice, and s…
S115
All hands on deck to connect the next billions | IGF 2023 WS #198 — Improving “cyber hygiene” skills is also important, which involves educating individuals on safe and secure internet pra…
S116
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Eileen Donahoe: Great. First, let me congratulate the organizers here. This is a really remarkable event and it’s a ver…
S117
DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023 — Advocacy exists for public or stakeholder ownership of open education resources. The argument is that open education res…
S118
Introduction — As societies invest in these goods, a wealth of knowledge, best practices and experience is being gathered….
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Kavikrut
5 arguments · 175 words per minute · 2461 words · 840 seconds
Argument 1
AI can democratize access to healthcare and social services, turning AI into a race to the top rather than a race to the bottom (Kavikrut)
EXPLANATION
Kavikrut argues that AI should be leveraged to broaden access to essential services, positioning it as a catalyst for positive societal outcomes rather than a source of inequality. He frames the debate as a choice between a race to the top, focused on impact, versus a race to the bottom.
EVIDENCE
He opened the discussion by stating that AI will create and democratize access to healthcare and services, noting that it should improve price and availability [1-2]. Later he quoted Nandan Nilekani’s comment about AI being a race to the top or bottom and emphasized that focusing on impact is the way to go to the top [63-64].
MAJOR DISCUSSION POINT
Democratizing access
AGREED WITH
Kritika Sangani, Rajesh Babu, Himanshu AIM
Argument 2
Massive 5G and mobile data usage in India provides the infrastructure needed for AI to be deployed uniformly across regions (Kavikrut)
EXPLANATION
Kavikrut highlights India’s high mobile data consumption and 5G penetration as a foundational layer that can support nationwide AI deployment, ensuring that even remote areas can benefit from AI‑driven services.
EVIDENCE
He cited statistics that the average 5G mobile data usage in India is 22 GB per month, underscoring the scale of connectivity that can underpin AI applications [206-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s robust digital and connectivity infrastructure, which underpins nationwide AI deployment, is noted in [S19].
MAJOR DISCUSSION POINT
Infrastructure readiness
AGREED WITH
Himanshu AIM, Rajesh Babu
Argument 3
The vision of a “United Entitlements Interface” (UEI) – a single portal akin to UPI for all constitutional rights – would streamline eligibility checks and applications (Kavikrut)
EXPLANATION
Kavikrut describes a proposed unified digital platform that would let citizens check eligibility and claim any constitutional entitlement through one interface, mirroring the simplicity of the UPI payment system.
EVIDENCE
He explained that the UEI would combine features of DigiLocker and DigiYatra, allowing users to log in, verify eligibility, and apply for rights in a single step [112-116].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy to UPI for public services and the push for a single-touch entitlement experience are described in [S20] and [S1].
MAJOR DISCUSSION POINT
Unified entitlement portal
AGREED WITH
Kritika Sangani
Argument 4
AI is the most powerful tool startups have ever had, enabling rapid problem‑solving and scaling across sectors (Kavikrut)
EXPLANATION
Kavikrut asserts that AI acts as a super‑charged engine for startups, allowing them to address problems faster and at larger scale than any previous technology.
EVIDENCE
He noted that over the past 15 years AI has become the strongest tool for startups, citing examples from the gig-economy and food-delivery platforms that have reshaped labor markets [55-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rapid innovation capacity of Indian startups is emphasized in [S21], and criteria for true AI-focused startups are outlined in [S22].
MAJOR DISCUSSION POINT
AI as startup catalyst
AGREED WITH
Himanshu AIM, Rajesh Babu
Argument 5
Founders should prioritize sectors with high social impact—especially healthcare and education—to unlock national super‑powers (Kavikrut, Audience Member 2)
EXPLANATION
Kavikrut (and echoed by an audience member) urges entrepreneurs to focus on healthcare and education, arguing that channeling talent into these sectors can generate massive social and economic benefits for the country.
EVIDENCE
He argued that even a 10% shift of talent from fintech to healthcare or education would fundamentally change the nation’s capabilities, emphasizing the size of India’s young talent pool [377-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for founders to focus on health and education for high impact appear in [S1] and are reinforced by the broader startup-nation narrative in [S21].
MAJOR DISCUSSION POINT
Sector focus for founders
Kritika Sangani
7 arguments · 154 words per minute · 1552 words · 602 seconds
Argument 1
AI reduces the number of steps required to claim entitlements, turning a 10‑step process into a single‑touch experience (Kritika Sangani)
EXPLANATION
Kritika explains that current entitlement processes involve around ten cumbersome steps, and AI can compress this into a single interaction, dramatically simplifying citizen access.
EVIDENCE
She described the existing ten-step burden for citizens to obtain a single entitlement and posed the question of reducing it to a single-touch process [24-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reduction of a ten-step entitlement process to a single-touch workflow is discussed in [S1] and further illustrated in [S23].
MAJOR DISCUSSION POINT
Simplifying entitlement steps
AGREED WITH
Kavikrut
Argument 2
Embedding equity‑focused algorithms (e.g., gender balance, socio‑economic targeting) ensures AI solutions promote inclusive outcomes (Kritika Sangani)
EXPLANATION
Kritika stresses that AI systems should be designed with built‑in equity parameters—such as gender parity and socio‑economic representation—to guarantee fair outcomes for vulnerable groups.
EVIDENCE
She detailed how their digital lottery includes an equity algorithm that balances gender ratios and ensures representation of children with special needs or from weaker socio-economic backgrounds [380-384].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Equity-by-design algorithmic approaches and bias mitigation strategies are covered in [S24] and [S26].
MAJOR DISCUSSION POINT
Equity‑by‑design in AI
AGREED WITH
Himanshu AIM
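The session does not describe the lottery’s mechanics, so the following is a hypothetical Python sketch only: a stratified draw under the constraints Kritika names (a reserved share of seats for special-needs/EWS applicants, a near 50-50 gender balance). All names, keys, and the reserved-share value are illustrative assumptions, not the actual RTE MIS algorithm.

```python
import random

def equity_lottery(applicants, seats, reserved_share=0.25):
    # Hypothetical sketch; the real RTE MIS algorithm is not public here.
    # Step 1: reserve a share of seats for special-needs/EWS applicants.
    # Step 2: alternate random draws between gender pools so the selected
    # cohort stays near a 50-50 ratio even if the applicant pool is skewed.
    rng = random.Random(42)  # seeded only to make the illustration repeatable
    special = [a for a in applicants if a["special_needs"] or a["ews"]]
    n_reserved = min(len(special), int(seats * reserved_share))
    selected = rng.sample(special, n_reserved)
    remaining = [a for a in applicants if a not in selected]
    girls = [a for a in remaining if a["gender"] == "F"]
    boys = [a for a in remaining if a["gender"] == "M"]
    while len(selected) < seats and (girls or boys):
        for pool in (girls, boys):
            if pool and len(selected) < seats:
                selected.append(pool.pop(rng.randrange(len(pool))))
    return selected
```

Alternating draws between the two pools is one simple way to keep outcomes balanced without quotas hard-failing when one pool runs short.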
Argument 3
The Right‑to‑Education (RTE) digital lottery replaces physical visits with an online, algorithm‑driven selection, dramatically cutting transaction time (Kritika Sangani)
EXPLANATION
Kritika outlines the development of an RTE Management Information System that digitizes the lottery‑based school seat allocation, removing the need for parents to physically visit multiple schools.
EVIDENCE
She explained that the RTE MIS digitizes the lottery mechanism, integrating it into a digital platform that has been adopted in 18 states, cutting down a multi-step physical process to an online draw [81-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RTE digital lottery platform and its impact on school admissions are detailed in [S1] and [S2].
MAJOR DISCUSSION POINT
Digitalizing school admissions
Argument 4
A multilingual WhatsApp chatbot serves as the first interface for parents, improving targeting of the most vulnerable and easing frontline worker load (Kritika Sangani)
EXPLANATION
Kritika describes a multilingual chatbot on WhatsApp that acts as the initial point of contact for parents, helping to identify the most vulnerable applicants and reducing the burden on frontline workers.
EVIDENCE
She noted that the chatbot is multilingual, serves as the first interface for students or parents to apply, and also builds frontline worker capacity, thereby reducing their workload while improving targeting [95-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The multilingual WhatsApp chatbot serving as the initial citizen interface is described in [S1].
MAJOR DISCUSSION POINT
Chatbot for vulnerable outreach
Argument 5
Proactively embed equity checks (gender, socio‑economic status) into AI algorithms to avoid bias and ensure balanced outcomes (Kritika Sangani)
EXPLANATION
Kritika reiterates that AI solutions must contain proactive equity safeguards—such as gender balance and socio‑economic targeting—to prevent systemic bias and guarantee inclusive results.
EVIDENCE
She again highlighted the equity algorithm that ensures a 50-50 gender application rate and representation of children from weaker socio-economic backgrounds within the digital lottery system [380-384].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive equity safeguards and bias mitigation in AI systems are discussed in [S24] and [S26].
MAJOR DISCUSSION POINT
Bias mitigation in AI
Argument 6
Keep humans “in the loop”—frontline workers must retain access to AI‑enhanced information to serve marginalized communities effectively (Kritika Sangani)
EXPLANATION
Kritika argues that while AI can automate many processes, human frontline workers (e.g., Anganwadi or ASHA workers) must remain integral, using AI tools to simplify their tasks rather than replace them.
EVIDENCE
She emphasized that AI should make frontline workers’ jobs easier by providing simple, accessible information, ensuring they do not spend excessive time parsing complex eligibility criteria [385-388].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-in-the-loop principles and the importance of user involvement in AI design are highlighted in [S24] and [S26].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop principle
Argument 7
AI can enable the state to discover eligible citizens by layering multiple data sources, flipping the discovery process from citizen‑led to state‑led.
EXPLANATION
She proposes using AI and machine learning to combine government databases (e.g., VBG, PDS, district‑level data) so that the state can proactively identify individuals who qualify for entitlements, reducing the need for citizens to search for schemes.
EVIDENCE
In response to an audience question, she explains that AI can layer VBG, PDS, and aspirational district data to identify districts where 95% of citizens are eligible for a particular entitlement, turning the discovery process around [326-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Machine-learning-driven district targeting for welfare outreach (Educate Girls) is exemplified in [S2], and state-led digital public infrastructure concepts are discussed in [S20].
MAJOR DISCUSSION POINT
AI‑driven proactive welfare outreach
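The panel gives no implementation detail for this “flipped discovery” idea. As a minimal sketch under stated assumptions, the made-up district-level coverage figures below stand in for the VBG/PDS layers she mentions, and multiplying the two shares assumes the criteria are independent, which real data would not guarantee:

```python
def flag_districts(pds_coverage, income_eligible, threshold=0.95):
    # Layer two hypothetical district-level datasets: the share of households
    # on the PDS rolls and the share meeting an income criterion. Districts
    # whose estimated eligible share meets the threshold are flagged for
    # proactive state-led outreach instead of citizen-initiated applications.
    flagged = []
    for district in pds_coverage:
        # Simplifying assumption: the two eligibility shares are independent.
        eligible = pds_coverage[district] * income_eligible.get(district, 0.0)
        if eligible >= threshold:
            flagged.append(district)
    return sorted(flagged)
```

The point of the sketch is the direction of the query: the state scans its own layered data for near-universally eligible districts rather than waiting for individual claims.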
Rajesh Babu
6 arguments · 184 words per minute · 2059 words · 670 seconds
Argument 1
AI‑powered personal agents can give pharma reps instant briefing and help patients navigate care, improving availability and speed of health services (Rajesh Babu)
EXPLANATION
Rajesh describes an AI‑driven personal assistant that aggregates past interactions with doctors and delivers a concise briefing to pharma representatives before each visit, enhancing the speed and relevance of information delivery.
EVIDENCE
He explained that the AI app scans the rep’s calendar, pulls recent CRM conversations, and generates a voice memo summarizing key points for the upcoming doctor visit, acting like a “morning presidential briefing” [261-267].
MAJOR DISCUSSION POINT
AI briefing agents for pharma reps
AGREED WITH
Kavikrut, Himanshu AIM
Argument 2
AI‑driven “morning briefing” agents synthesize past doctor‑rep interactions, giving sales reps actionable insights at the point of care (Rajesh Babu)
EXPLANATION
Rajesh expands on the same concept, emphasizing that the AI system creates a daily briefing that equips sales reps with the latest conversation history and recommended talking points, thereby improving care coordination.
EVIDENCE
He detailed how the system pulls data from calendars and CRM, creates a voice memo, and delivers it to reps each morning, ensuring they are prepared with up-to-date information [261-267].
MAJOR DISCUSSION POINT
Daily AI briefing for health sales
Argument 3
Advanced AI models can match liver‑transplant donors and recipients by analyzing complex biological parameters, improving transplant outcomes (Rajesh Babu)
EXPLANATION
Rajesh outlines a collaboration with a research institute where AI evaluates numerous biological and physiological parameters to identify optimal donor‑recipient matches, a task too complex for traditional algorithms.
EVIDENCE
He described how AI processes multiple donor and patient parameters to predict the best match, citing work with Scripps Institute and an Indian researcher, leading to better post-transplant outcomes [300-306].
MAJOR DISCUSSION POINT
AI for organ transplant matching
Argument 4
AI, delivered via ubiquitous smartphones, can flatten rather than widen digital gaps, offering equal access to advanced tools (Rajesh Babu)
EXPLANATION
Rajesh argues that because smartphones are widespread, AI delivered through them can level the playing field, providing equal access to sophisticated technologies across socio‑economic groups.
EVIDENCE
He stated that AI, accessed through smartphones, is not a divider but rather a great equalizer, noting that everyone now has a phone and thus AI can reach all [393-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s robust digital and mobile infrastructure that enables smartphone-based AI access is noted in [S19].
MAJOR DISCUSSION POINT
Smartphone‑based AI equity
AGREED WITH
Kavikrut, Kritika Sangani, Himanshu AIM
DISAGREED WITH
Kritika Sangani, Himanshu AIM
Argument 5
AI‑driven personal agents can automate task reminders and coordination for employees, boosting organisational efficiency.
EXPLANATION
He describes a system where an AI agent scans a user’s calendar and email, summarises recent interactions, and delivers a concise voice briefing each morning, effectively acting as a personal assistant for work tasks.
EVIDENCE
He explains that the AI app looks at the calendar, pulls recent CRM conversations, and generates a voice memo briefing the rep on upcoming doctor visits, likening it to a “morning presidential briefing” [261-267][280-292].
MAJOR DISCUSSION POINT
AI personal assistants for workplace productivity
Argument 6
AI agents can reduce long waiting times for specialist consultations and surgeries by streamlining referrals and information flow.
EXPLANATION
He argues that AI‑enabled doctor and patient agents can handle routine queries and triage, allowing human doctors to focus on complex cases and thereby shortening wait periods that currently stretch months.
EVIDENCE
He notes that in western countries patients wait six months to a year for specialists, and that AI agents could eliminate such delays by handling preliminary interactions and coordinating care [315-316].
MAJOR DISCUSSION POINT
AI for accelerating access to specialist healthcare
Himanshu AIM
9 arguments · 188 words per minute · 2364 words · 751 seconds
Argument 1
State Innovation Missions use AI to bring frontier technologies to underserved regions, aiming for equitable impact across the country (Himanshu AIM)
EXPLANATION
Himanshu explains that the Atal Innovation Mission establishes State Innovation Missions that embed AI and other frontier technologies into local ecosystems, especially in regions that have lagged behind in digital adoption.
EVIDENCE
He described the mission’s goal to move beyond current innovation levels, bring AI and frontier tech to all parts of the ecosystem, and cited examples such as water-quality hackathons and bamboo market dashboards [122-124][155-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The goal of extending frontier AI technologies to all regions aligns with the strong digital foundation described in [S19] and the policy-framework emphasis in [S18].
MAJOR DISCUSSION POINT
AI‑enabled state missions
AGREED WITH
Rajesh Babu, Kavikrut
Argument 2
AI‑driven data dashboards for water quality and bamboo market access illustrate how sector‑specific AI tools can scale local solutions (Himanshu AIM)
EXPLANATION
He presents two concrete use‑cases where AI‑powered dashboards help states address specific challenges: detecting iron content in water and creating a market linkage platform for bamboo producers.
EVIDENCE
He recounted a hackathon to use AI for mapping iron levels in water across districts and a dashboard that connects bamboo producers with global markets, showing how data can be turned into actionable solutions [155-164][165-174].
MAJOR DISCUSSION POINT
Sector‑specific AI dashboards
Argument 3
State Innovation Missions address the technology gap between advanced southern/western states and lagging eastern/northern states, fostering peer‑to‑peer learning (Himanshu AIM)
EXPLANATION
Himanshu highlights regional disparities in tech capacity and explains that the missions aim to bridge this gap by facilitating knowledge exchange and mentorship between more advanced and less‑advanced states.
EVIDENCE
He noted the stark contrast between states like Telangana, Karnataka, Maharashtra and the northeastern/eastern states, and described plans to launch a mission in the northeast to promote peer-to-peer learning [122-130][140-148].
MAJOR DISCUSSION POINT
Bridging regional tech gaps
Argument 4
Language diversity (22 scheduled languages and numerous dialects) is a critical divide; AI language models are essential to bridge it (Himanshu AIM)
EXPLANATION
He points out India’s linguistic complexity and argues that multilingual AI models are crucial for ensuring that AI benefits reach all language communities, especially in rural areas.
EVIDENCE
He mentioned that India has 22 scheduled languages and many dialects, and that AI models being developed by government and private players can help democratize access across these linguistic groups [403-410].
MAJOR DISCUSSION POINT
AI for language inclusion
Argument 5
Startups should focus on solving real problems rather than chasing VC money; AI can accelerate long‑term value creation (Himanshu AIM)
EXPLANATION
Himanshu advises founders to prioritize genuine societal problems over short‑term venture capital attraction, noting that AI can speed up the creation of lasting value.
EVIDENCE
He argued that founders should look for real gaps, not just chase VC money, and that AI can help accelerate impact, referencing his belief that many startups chase trends rather than substantive problems [358-372].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for problem-first startup focus is articulated in [S1] and reinforced by the startup-nation perspective in [S21].
MAJOR DISCUSSION POINT
Problem‑first startup mindset
AGREED WITH
Kavikrut, Rajesh Babu
Argument 6
AI‑enabled traffic cameras can monitor vehicle flow in real time, optimise signal timings and reduce fuel consumption, generating carbon‑credit savings.
EXPLANATION
He proposes installing simple AI‑powered cameras at traffic lights to analyse traffic patterns and automatically adjust signals, leading to more efficient traffic management and environmental benefits.
EVIDENCE
He describes using cameras on traffic lights to monitor flow each minute and linking them to AI that can automate adjustments, thereby reducing petrol and diesel consumption and earning carbon credits [211-213].
MAJOR DISCUSSION POINT
AI for smart urban mobility and environmental savings
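The cameras and control logic discussed were not specified, so the following is illustration only: a proportional green-time allocator that turns per-minute vehicle counts (hypothetical numbers) into signal timings within a fixed cycle, with a minimum green per approach.

```python
def retime_signal(counts, cycle=120, min_green=15):
    # Allocate green time within a fixed cycle length in proportion to the
    # vehicle counts observed per approach by a camera feed. Every approach
    # keeps a minimum green phase regardless of demand.
    total = sum(counts.values())
    budget = cycle - min_green * len(counts)  # seconds left to distribute
    greens = {}
    for approach, n in counts.items():
        share = n / total if total else 1 / len(counts)
        greens[approach] = min_green + round(budget * share)
    return greens
```

Even this crude proportional rule captures the claimed mechanism: signals adapt to measured flow instead of fixed schedules, cutting idling and fuel burn.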
Argument 7
Satellite imagery combined with AI can be used to monitor lake levels and detect drying trends, supporting early environmental interventions.
EXPLANATION
He suggests leveraging readily available satellite data and AI analysis to identify water bodies that are shrinking, enabling timely policy or remedial actions.
EVIDENCE
He mentions using satellite imagery to identify how lakes are drying up, noting that the data is already available and does not require new collection efforts [215-217].
MAJOR DISCUSSION POINT
AI‑driven environmental monitoring
Argument 8
AI‑powered sensors embedded in water pipelines can pinpoint leak locations and suggest material improvements, reducing water loss and maintenance costs.
EXPLANATION
He outlines a solution where small AI‑enabled sensors continuously measure flow at intervals along pipelines, detecting anomalies that indicate leaks and informing better material choices for repairs.
EVIDENCE
He describes startups that have created sensors embedded in pipelines to detect flow at regular intervals, locate leaks, and later analyze material stress to prevent future failures [220-224].
MAJOR DISCUSSION POINT
AI for infrastructure resilience
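The sensor analytics were not detailed on the panel; as a stand-in for the described AI-based detection, a simple mass-balance check over flow readings from sensors placed at intervals along the pipeline might look like the sketch below (sensor layout, units, and tolerance are assumptions):

```python
def locate_leaks(flows, tolerance=0.05):
    # flows: flow readings (e.g. litres/min) from consecutive pipeline
    # sensors, ordered upstream to downstream. Flag any segment where the
    # downstream reading drops by more than `tolerance` relative to the
    # upstream reading -- a basic mass-balance anomaly check.
    leaks = []
    for i in range(len(flows) - 1):
        upstream, downstream = flows[i], flows[i + 1]
        if upstream > 0 and (upstream - downstream) / upstream > tolerance:
            leaks.append((i, i + 1))  # suspected leak between sensors i and i+1
    return leaks
```

Because sensors sit at known intervals, a flagged segment bounds the leak location, which is the pinpointing benefit Himanshu describes.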
Argument 9
Ubiquitous smartphones provide a platform for AI that can flatten the rural‑urban digital divide, giving all citizens access to advanced tools.
EXPLANATION
He argues that because smartphones are now widespread, AI delivered through mobile apps can reach even remote populations, turning technology into an equaliser rather than a divider.
EVIDENCE
He states that AI, delivered via smartphones, is not a divider because almost everyone now has a phone, allowing AI to reach all segments of society [401-403].
MAJOR DISCUSSION POINT
Smartphone‑based AI as an equaliser
Audience Member
2 arguments · 172 words per minute · 129 words · 44 seconds
Argument 1
Awareness of government welfare schemes remains low among rural poor; AI can be used to flip discovery from citizen‑led to state‑led to improve reach (Audience Member)
EXPLANATION
The audience member asks whether AI can help the state discover eligible citizens rather than relying on citizens to find schemes, suggesting a proactive AI‑driven outreach model.
EVIDENCE
The question highlighted the low awareness among rural poor and asked about using AI to reverse the discovery process [307-308]; Kritika responded by describing a flip-the-discovery approach using AI-layered data to let the state identify eligible citizens [326-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven district-level targeting for welfare outreach (Educate Girls) is presented in [S2], and the concept of state-led digital public infrastructure is discussed in [S20].
MAJOR DISCUSSION POINT
AI‑driven outreach for welfare awareness
Argument 2
Future AI breakthroughs may include AI‑based cancer risk prediction tools that integrate genomic and clinical data (Audience Member)
EXPLANATION
An audience member speculates that AI could soon enable predictive cancer risk assessments by combining genetic and health data, potentially transforming early detection and treatment.
EVIDENCE
The audience member referenced a joint program with Swissnext where a startup is developing a test that maps cancer patient profiles to DNA data to predict cancer risk, noting it is still in validation [321-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in early disease detection, including cancer, is highlighted in [S16].
MAJOR DISCUSSION POINT
Predictive AI in oncology
Audience Member 2
1 argument · 174 words per minute · 38 words · 13 seconds
Argument 1
Founders should prioritize sectors with high social impact—especially healthcare and education—to unlock national super‑powers (Audience Member 2)
EXPLANATION
The audience member asks which sectors need more startup activity and suggests that focusing on healthcare and education could unleash significant social and economic benefits for India.
EVIDENCE
The question emphasized the need for more startups in sectors beyond current trends, and Kavikrut later echoed this sentiment, stating that shifting talent to healthcare and education would fundamentally change the country’s capabilities [351-355][377-382].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for sector focus on health and education appear in [S1] and are echoed in the broader startup narrative in [S21].
MAJOR DISCUSSION POINT
Sector prioritisation for impact
Yashi (Audience Member 3)
1 argument · 142 words per minute · 40 words · 16 seconds
Argument 1
AI systems deployed for public welfare risk deepening digital divides unless safeguards are built in.
EXPLANATION
The audience member warns that without careful design, AI could exacerbate existing inequities for rural and marginalized communities, calling for proactive measures to prevent such outcomes.
EVIDENCE
She directly asks how to ensure AI does not deepen digital divides for rural and marginalized communities, highlighting the concern about potential negative impacts of AI deployment in public welfare [378].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of bias and the need for equity safeguards in AI systems are discussed in [S24] and [S26].
MAJOR DISCUSSION POINT
Preventing AI‑driven widening of digital inequities
Agreements
Agreement Points
AI is viewed as a democratizing force that can broaden access to healthcare, social services and other public entitlements, thereby reducing digital divides.
Speakers: Kavikrut, Kritika Sangani, Rajesh Babu, Himanshu AIM
AI can democratize access to healthcare and social services, turning AI into a race to the top rather than a race to the bottom (Kavikrut)
AI reduces the number of steps required to claim entitlements, turning a 10‑step process into a single‑touch experience (Kritika Sangani)
AI, delivered via ubiquitous smartphones, can flatten rather than widen digital gaps, offering equal access to advanced tools (Rajesh Babu)
AI delivered through ubiquitous smartphones can bridge the rural‑urban divide, and multilingual AI models can bridge the language divide (Himanshu AIM)
All four speakers stress that AI should be leveraged to make essential services reachable for all citizens, cutting through cost, geographic and linguistic barriers. Kavikrut opens the panel by noting AI will create and democratise access to healthcare and services [1-2] and later cites the “race to the top” framing [63-64]. Kritika highlights the current ten-step entitlement burden and the potential to collapse it to a single interaction [24-26]. Rajesh argues that smartphone penetration makes AI an equaliser rather than a divider [393-398]. Himanshu adds that smartphones and multilingual AI models further level the playing field across language groups [401-410].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors UNCTAD’s description of AI as a ‘great equaliser’ for health services [S49] and the broader narrative that technology can reduce digital gaps [S48]; however, UNCTAD also warns that without inclusive data practices AI may exacerbate divides [S41].
AI solutions should embed equity safeguards (gender balance, socio‑economic targeting) and keep humans in the loop to avoid bias and ensure inclusive outcomes.
Speakers: Kritika Sangani, Himanshu AIM
Embedding equity‑focused algorithms (e.g., gender balance, socio‑economic targeting) ensures AI solutions promote inclusive outcomes (Kritika Sangani)
Language diversity (22 scheduled languages and many dialects) is a critical divide; AI language models are essential to bridge it (Himanshu AIM)
Both speakers argue that AI must be designed with built-in equity measures. Kritika describes an equity algorithm that guarantees gender parity and representation of vulnerable groups in the RTE lottery system [380-384] and stresses keeping frontline workers in the loop [385-388]. Himanshu points out India’s linguistic fragmentation and the need for multilingual AI models to prevent exclusion of language minorities [403-410].
POLICY CONTEXT (KNOWLEDGE BASE)
The recommendation aligns with UNCTAD’s call to mitigate algorithmic bias through diverse training data [S40] and its emphasis on addressing the gender-digital divide in trade regulations [S47]; gender-inclusive AI policies were also highlighted at IGF 2023 [S62].
AI can dramatically simplify entitlement and service delivery processes, moving from multi‑step, physical interactions to single‑touch digital experiences.
Speakers: Kritika Sangani, Kavikrut
AI reduces the number of steps required to claim entitlements, turning a 10‑step process into a single‑touch experience (Kritika Sangani)
The vision of a “United Entitlements Interface” (UEI) – a single portal akin to UPI for all constitutional rights – would streamline eligibility checks and applications (Kavikrut)
Kritika emphasizes the need to cut ten cumbersome steps to a single interaction for citizens [24-26] and illustrates this with the RTE digital lottery that replaces physical visits [81-82]. Kavikrut expands the idea to a national UEI platform that would let users check eligibility and claim any right in one step, mirroring UPI’s simplicity [112-116]. Both converge on the goal of a single-touch, AI-enabled service delivery model.
POLICY CONTEXT (KNOWLEDGE BASE)
The single-touch entitlement concept was advocated in the India-Japan collaboration on welfare simplification [S53] and fits within UNCTAD’s observation that AI can streamline public services when responsibly integrated [S59]; the risk of deepening divides without inclusive design is noted in frontier-technology reviews [S41].
AI is the most powerful catalyst for startups, enabling rapid problem‑solving, scaling and organisational efficiency.
Speakers: Kavikrut, Himanshu AIM, Rajesh Babu
AI is the most powerful tool startups have ever had, enabling rapid problem‑solving and scaling across sectors (Kavikrut)
Startups should focus on solving real problems rather than chasing VC money; AI can accelerate long‑term value creation (Himanshu AIM)
AI‑powered personal agents can give pharma reps instant briefing and help patients navigate care, improving availability and speed of health services (Rajesh Babu)
Kavikrut describes AI as a super-charged engine for startups over the past 15 years [55-62]. Himanshu advises a problem-first mindset, noting AI can speed up impact once real gaps are identified [358-372]. Rajesh showcases a concrete AI personal-assistant that aggregates CRM data to brief sales reps, illustrating organisational efficiency gains [261-267]. All three see AI as a transformative accelerator for startup performance.
POLICY CONTEXT (KNOWLEDGE BASE)
This claim is supported by the ‘Building Inclusive Societies with AI’ report urging industry-government collaboration for social impact startups [S44] and UNCTAD’s recommendation to create supportive policy environments for AI-driven enterprises [S50]; concrete examples include co-funded AI innovation labs for startups [S45].
Targeted state‑level AI initiatives and robust connectivity infrastructure are essential to bridge regional technology gaps.
Speakers: Himanshu AIM, Rajesh Babu, Kavikrut
State Innovation Missions use AI to bring frontier technologies to underserved regions, aiming for equitable impact across the country (Himanshu AIM)
AI, delivered via ubiquitous smartphones, can flatten rather than widen digital gaps, giving all citizens access to advanced tools (Rajesh Babu)
Massive 5G and mobile data usage in India provides the infrastructure needed for AI to be deployed uniformly across regions (Kavikrut)
Himanshu outlines State Innovation Missions that embed AI in lagging states to level the playing field [122-124][140-148] and highlights the stark east-west versus north-east disparity [124-130]. Rajesh reinforces that widespread smartphone ownership makes AI an equaliser [393-398]. Kavikrut adds that India’s high 5G data consumption (average 22 GB/month) supplies the necessary backbone for nationwide AI deployment [206-208]. Together they agree that both policy-driven missions and connectivity are key to regional equity.
POLICY CONTEXT (KNOWLEDGE BASE)
State-level AI programmes and connectivity are highlighted in the Global AI Policy Framework, which stresses regional infrastructure and local-language content as prerequisites for inclusion [S58] and in regional leader discussions on AI-ready digital infrastructure [S60]; state-funded AI labs exemplify this approach [S45].
Similar Viewpoints
Both see AI as a nation‑wide equalising tool that must be deliberately deployed through state‑level programmes to avoid a ‘race to the bottom’ and instead drive inclusive impact [1-2,63-64,122-124,140-148].
Speakers: Kavikrut, Himanshu AIM
AI can democratize access to healthcare and social services, turning AI into a race to the top rather than a race to the bottom (Kavikrut)
State Innovation Missions use AI to bring frontier technologies to underserved regions, aiming for equitable impact across the country (Himanshu AIM)
Both portray AI as a productivity‑boosting assistant that can streamline professional workflows and accelerate service delivery, especially in health‑related domains [55-62,261-267].
Speakers: Kavikrut, Rajesh Babu
AI is the most powerful tool startups have ever had, enabling rapid problem‑solving and scaling across sectors (Kavikrut)
AI‑powered personal agents can give pharma reps instant briefing and help patients navigate care, improving availability and speed of health services (Rajesh Babu)
Both stress that AI must be designed with built‑in equity mechanisms—whether gender/socio‑economic balancing or multilingual capability—to prevent exclusion of vulnerable groups [380-384,403-410].
Speakers: Kritika Sangani, Himanshu AIM
Embedding equity‑focused algorithms (e.g., gender balance, socio‑economic targeting) ensures AI solutions promote inclusive outcomes (Kritika Sangani)
Language diversity (22 scheduled languages and numerous dialects) is a critical divide; AI language models are essential to bridge it (Himanshu AIM)
Unexpected Consensus
AI will *flatten* rather than deepen digital divides, despite common fears of technology‑driven exclusion.
Speakers: Rajesh Babu, Himanshu AIM, Kavikrut
AI, delivered via ubiquitous smartphones, can flatten rather than widen digital gaps, offering equal access to advanced tools (Rajesh Babu)
AI and the smartphone have democratized the divide between rural and urban, and multilingual AI models can bridge the language divide (Himanshu AIM)
Massive 5G and mobile data usage in India provides the infrastructure needed for AI to be deployed uniformly across regions (Kavikrut)
While many debates anticipate AI exacerbating inequality, these speakers jointly assert that existing connectivity (5G, smartphones) and multilingual AI will level the field across geography and language, turning AI into an equaliser rather than a divider [393-398][401-410][206-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The optimism aligns with the view of technology as a democratising force [S48], yet UNCTAD warns that AI could deepen existing inequities without proper safeguards [S41]; open-source debates also note that access alone does not guarantee equity [S55].
Recognition that language diversity is a primary barrier and that AI language models are essential for inclusive AI deployment.
Speakers: Himanshu AIM, Kritika Sangani
Language diversity (22 scheduled languages and numerous dialects) is a critical divide; AI language models are essential to bridge it (Himanshu AIM)
Embedding equity‑focused algorithms (e.g., gender balance, socio‑economic targeting) ensures AI solutions promote inclusive outcomes (Kritika Sangani)
The explicit linking of linguistic inclusion to AI design is not raised by most panelists; only Himanshu foregrounds it, and Kritika’s equity-by-design stance aligns with the broader inclusion principle, creating an unexpected joint emphasis on language as a dimension of equity [403-410][380-384].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of local-language content is underscored in the Global AI Policy Framework, which cites language gaps as a barrier to inclusion [S58]; open-source discussions similarly stress the need for localized datasets to achieve equitable AI [S55]; broader analyses emphasize inclusive data to avoid bias [S41].
Overall Assessment

The panel exhibits strong convergence on the view that AI should be harnessed as an inclusive, democratizing technology—simplifying entitlement processes, embedding equity safeguards, and scaling impact through startups and state‑level missions. Participants across government, civil‑society and private sectors align on the need for robust connectivity, multilingual capability and human‑in‑the‑loop designs.

High consensus: most speakers echo each other’s core messages, indicating a shared strategic direction for AI‑for‑good initiatives in India. This broad agreement suggests that future policy and programme design can build on a common foundation of equity‑by‑design, single‑touch service delivery, and infrastructure‑driven deployment.

Differences
Different Viewpoints
How to ensure AI does not deepen digital divides and promotes equity
Speakers: Rajesh Babu, Kritika Sangani, Himanshu AIM
AI, delivered via ubiquitous smartphones, can flatten rather than widen digital gaps, offering equal access to advanced tools (Rajesh Babu)
Proactively embed equity checks (gender, socio‑economic status) into AI algorithms and keep humans in the loop to guarantee inclusive outcomes (Kritika Sangani)
Language diversity is a critical divide; multilingual AI models are essential to democratise access across 22 scheduled languages and many dialects (Himanshu AIM)
Rajesh argues that AI is automatically an equaliser because smartphones are widespread [393-398]. Kritika counters that without explicit equity-by-design safeguards and human-in-the-loop support, AI could perpetuate bias, so she advocates embedding gender and socio-economic balancing algorithms and supporting frontline workers [380-388]. Himanshu adds that linguistic diversity creates a separate divide that must be addressed with multilingual models, highlighting a gap not covered by the other two positions [400-410]. The three speakers therefore disagree on the necessary safeguards and focus areas to prevent AI from widening existing inequities.
POLICY CONTEXT (KNOWLEDGE BASE)
Ensuring AI does not widen gaps is a recurring theme in UNCTAD’s frontier-technology report calling for equitable data and policy frameworks [S41] and in workshops on responsible AI development that stress inclusive policies [S42]; concerns about algorithmic harm are also raised in debates on democratising technology [S48] and open-source limitations [S55].
What strategic focus should startups adopt when leveraging AI for social impact
Speakers: Kavikrut, Himanshu AIM
Founders should prioritize sectors with high social impact, especially healthcare and education, to unlock national superpowers and drive massive change (Kavikrut) [377-382]
Startups should adopt a problem‑first mindset, solving genuine societal gaps rather than chasing VC money or specific sectors; AI is a tool to accelerate impact, not a directive for sector choice (Himanshu AIM) [358-372]
Kavikrut urges a sector-specific reallocation of talent toward healthcare and education as the most effective way to harness AI for national development [377-382]. Himanshu, however, cautions against prescribing sectors, emphasizing that founders must identify real problems first and use AI to accelerate solutions, regardless of the domain [358-372]. This reflects a disagreement on whether AI strategy should be guided by sectoral priorities or by problem-driven entrepreneurship.
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic guidance for AI-driven social-impact startups is addressed in the ‘Building Inclusive Societies with AI’ brief urging deeper industry-government collaboration [S44] and UNCTAD’s call for policy incentives to nurture AI startups [S50]; practical models include co-funded AI labs supporting startup growth [S45].
Unexpected Differences
Assumption that AI alone will automatically equalise access versus the need for targeted equity mechanisms
Speakers: Rajesh Babu, Kritika Sangani
AI delivered via smartphones will not divide but flatten inequalities (Rajesh Babu)
Equity‑by‑design algorithms and human‑in‑the‑loop oversight are required to prevent bias and ensure balanced outcomes (Kritika Sangani)
Rajesh’s confident claim that AI is inherently an equaliser [393-398] was unexpected given Kritika’s emphasis on deliberate equity design and safeguards [380-388]. This reveals an unanticipated divergence: one view treats AI as a self-equalising technology, while the other sees it as a tool that must be carefully engineered to avoid reproducing existing disparities.
POLICY CONTEXT (KNOWLEDGE BASE)
The critique reflects UNCTAD’s warning that AI can exacerbate divides if equity mechanisms are absent [S41], the observation that technology’s democratising promise is not automatic [S48], and the open-source debate highlighting the need for protective measures beyond mere access [S55].
Overall Assessment

The panel largely agrees on AI’s transformative potential for democratising access to health, education and social services, and on the importance of digital infrastructure. The main points of contention revolve around how to safeguard equity—whether AI is automatically inclusive or requires explicit algorithmic and linguistic interventions—and the strategic direction for startups, i.e., sector‑specific focus versus problem‑first innovation.

Moderate consensus: while there is agreement on the overarching goal of AI for good, the disagreements pertain to implementation pathways (equity design, language inclusion, sector prioritisation). These differences suggest that policy and programme design will need to balance optimism about AI’s equalising power with concrete measures to address bias, language diversity, and sectoral strategy.

Partial Agreements
All speakers concur that AI has the potential to broaden access and simplify service delivery, and that robust digital infrastructure underpins this potential. However, they differ on implementation details such as equity safeguards, sector focus, and the balance between human involvement and automation [1-2][24-26][55-62][206-208][380-388][400-410].
Speakers: Kavikrut, Kritika Sangani, Himanshu AIM, Rajesh Babu
AI can democratise access to healthcare, education and social services, turning a multi‑step entitlement process into a single‑touch experience (Kavikrut, Kritika, Rajesh)
Embedding AI within existing government systems and digital public goods can streamline service delivery (Kavikrut, Kritika)
AI infrastructure (5G, high mobile data usage) provides the foundation for nationwide deployment (Kavikrut, Himanshu)
AI can act as a super‑charged tool for startups and innovators to solve problems at scale (Kavikrut, Himanshu)
Takeaways
Key takeaways
AI can democratize access to healthcare and social protection by turning multi‑step entitlement processes into single‑touch experiences.
Embedding AI within existing government systems (e.g., RTE digital lottery, multilingual WhatsApp chatbot) can dramatically reduce transaction time and improve targeting of the most vulnerable.
State Innovation Missions are being used to bring frontier technologies, including AI, to under‑served regions, creating peer‑to‑peer learning networks and reducing regional disparities.
Equity‑focused algorithms (gender balance, socio‑economic targeting) are essential to ensure AI solutions produce inclusive outcomes.
The concept of a United Entitlements Interface (UEI) – a UPI‑like portal for all constitutional rights – is envisioned to streamline eligibility checks and applications across sectors.
AI‑driven personal agents (e.g., briefing tools for pharma reps, doctor‑patient agents) can enhance information flow, reduce waiting times, and improve health outcomes.
Infrastructure strengths such as widespread 5G, high mobile data usage, and ubiquitous smartphones provide a foundation for uniform AI deployment.
Language diversity is a critical divide; multilingual AI models are needed to reach rural and linguistically diverse populations.
Startups view AI as the most powerful tool for rapid problem‑solving; the sectors identified as having the highest social impact are healthcare and education.
AI should be deployed with a human‑in‑the‑loop approach to support frontline workers and avoid deepening digital divides.
Resolutions and action items
Indus Action will continue experimenting with AI‑enhanced targeting via a multilingual WhatsApp chatbot to reduce frontline worker load and improve outreach to vulnerable families.
The Atal Innovation Mission (AIM) will launch a State Innovation Mission in an unnamed northeastern/eastern state next week, focusing on AI‑enabled solutions for water‑quality monitoring and bamboo market access.
AIM will establish a peer‑to‑peer learning network among states to share successful AI‑driven innovations.
Indus Action plans to embed equity algorithms (gender, socio‑economic balance) into future AI solutions for entitlement delivery.
The panel discussed advancing the United Entitlements Interface (UEI) concept as a single digital gateway for all constitutional rights.
Rajesh Babu’s team has redeveloped the AI briefing agent for pharma reps and is collaborating with the Scripps Institute on AI‑based liver‑transplant matching, moving toward deployment.
All participants agreed to keep humans in the loop and to prioritize impact‑driven AI projects over purely VC‑driven ventures.
Unresolved issues
How to achieve rapid, nationwide awareness of welfare schemes among the rural poor; specific timelines and scaling strategies remain unclear.
Details on the technical roadmap, data governance, and validation processes for the AI‑driven organ‑matching system were not fully addressed.
The exact implementation plan, governance model, and funding mechanisms for the United Entitlements Interface (UEI) were not resolved.
Strategies for ensuring sustained AI literacy and capacity building among frontline workers across diverse linguistic contexts need further elaboration.
The panel did not reach a concrete plan for encouraging more startups to enter health‑tech and edu‑tech beyond general encouragement.
Suggested compromises
Combine AI automation with human oversight: embed equity checks in algorithms while retaining frontline workers to verify and assist vulnerable users.
Prioritize impact over rapid VC funding: founders should focus on solving real social problems first, using AI to accelerate long‑term value creation.
Use AI to augment, not replace, existing government processes: integrate AI tools within current entitlement systems rather than building parallel platforms.
Address the language divide by developing multilingual AI models alongside universal smartphone deployment, ensuring rural and dialect‑rich populations are included.
Thought Provoking Comments
Right now, they take about 10 steps, 10 burdensome steps to access a single entitlement. How do we bring that down to a single touch process? That is what AI for good stands for us.
Highlights the core friction in social protection delivery and frames AI as a tool for radical simplification, turning a bureaucratic nightmare into a user‑centric experience.
Set the agenda for the whole panel, prompting others to discuss how AI can compress complex processes. It led directly to the deep dive into the RTE digital lottery and later AI‑driven targeting, shifting the conversation from abstract benefits to concrete process redesign.
Speaker: Kritika Sangani
We have a huge disparity between the western and southern parts at one end of the spectrum and the northeast, eastern and northern part of the country… we are launching State Innovation Missions in those lagging regions.
Brings regional inequality to the forefront and positions AI as a leveling mechanism, not just a national‑wide tool.
Created a turning point where the discussion moved from generic AI benefits to concrete policy‑level interventions. It sparked follow‑up remarks about peer‑to‑peer learning networks and the need for localized data (e.g., water‑quality hackathon).
Speaker: Himanshu (AIM)
Either it is a race to the top or race to the bottom with AI. The only way to go to the top is to focus on impact.
Frames AI adoption as an ethical choice rather than a technological inevitability, urging participants to align AI projects with social impact.
Prompted panelists to justify their initiatives in terms of impact rather than hype. It led to Kritika’s focus on equity algorithms and Rajesh’s emphasis on value‑first product design.
Speaker: Kavikrut
We introduced what we call the RTE MIS – a digital lottery integrated into an open‑source modular product that cut the physical transaction of school admissions from 10 steps to a single digital draw.
Provides a concrete, scalable example of a Digital Public Good that transformed a massive bureaucratic process, illustrating how AI can be embedded in existing systems.
Shifted the conversation from theory to practice, inspiring Himanshu to discuss state‑level replication and prompting the audience to ask about scaling numbers (900,000 to 9 million).
Speaker: Kritika Sangani
We are experimenting with a multilingual WhatsApp chatbot that serves as the first interface for parents, reducing frontline worker load and improving targeting of the most vulnerable.
Shows how low‑cost, widely available technology (WhatsApp) can be combined with AI for outreach and precise targeting, bridging the digital divide.
Expanded the dialogue on AI tools beyond high‑end platforms, leading to questions about awareness, language inclusion, and prompting Himanshu’s language‑divide comment.
Speaker: Kritika Sangani
An AI‑powered personal agent that, each morning, scans a pharma rep’s calendar, CRM, and past conversations to deliver a concise briefing – essentially a ‘morning presidential briefing’ for field staff.
Illustrates a tangible productivity boost for a high‑stakes industry, turning AI into an everyday assistant rather than a futuristic concept.
Prompted the panel to consider AI’s role in augmenting human decision‑making across sectors, leading to further discussion on personal agents for doctors and patients.
Speaker: Rajesh Babu
Using AI to match liver‑transplant donors and recipients by analysing a multitude of biological and physiological parameters that traditional algorithms cannot handle.
Demonstrates AI’s capacity to solve ultra‑complex, life‑saving problems, moving the conversation into deep scientific territory.
Elevated the discussion from service delivery to cutting‑edge medical research, reinforcing the “AI for good” narrative and inspiring audience curiosity about future breakthroughs.
Speaker: Rajesh Babu
There are 22 scheduled languages in India; the language divide is a major barrier, and AI models are beginning to democratise access across dialects.
Identifies a uniquely Indian challenge—linguistic diversity—and positions AI as a solution, expanding the definition of digital inclusion.
Redirected the panel to consider not just geographic but also linguistic equity, influencing Kritika’s later remarks on equity algorithms and prompting rapid‑fire answers about preventing digital divides.
Speaker: Himanshu (AIM)
Can we flip the discovery process so that the state discovers the citizen, layering AI/ML on exhaustive government data to proactively identify eligible beneficiaries?
Reverses the traditional outreach model, proposing a proactive, data‑driven approach that could dramatically reduce the “10‑step” barrier.
Sparked a new line of thought about predictive eligibility, leading to audience questions on awareness and Himanshu’s discussion of peer‑to‑peer learning networks.
Speaker: Kritika Sangani
We are using AI to map iron content in water at sub‑district levels and turning that data into a hackathon challenge for low‑cost diagnostics and solutions.
Shows a creative, grassroots application of AI that links environmental data to entrepreneurship, illustrating how AI can catalyse local innovation ecosystems.
Provided a vivid, relatable example that broadened the conversation from national policy to community‑level impact, reinforcing the theme of AI as an equaliser across regions.
Speaker: Himanshu (AIM)
Overall Assessment

The discussion was steered by a series of concrete, ground‑level examples that transformed the abstract notion of ‘AI for Good’ into actionable pathways. Kritika’s focus on simplifying entitlement access and embedding AI in existing public systems introduced the central problem of bureaucratic friction. Himanshu’s emphasis on regional disparity and language barriers reframed AI as a tool for equity, prompting the panel to consider both geographic and linguistic divides. Kavikrut’s ethical framing (race to the top vs. bottom) and Rajesh’s vivid illustrations—from personal agents for pharma reps to AI‑driven organ matching—added depth and breadth, moving the conversation from policy to daily workflow to cutting‑edge science. Each of these pivotal comments sparked new sub‑topics, shifted perspectives, and deepened the analysis, ultimately shaping a dialogue that balanced visionary ambition with pragmatic, inclusive implementation.

Follow-up Questions
What medical breakthrough do you believe will emerge in the market in the next 3‑4 years as a result of integrating medical science and AI?
Identifying near‑term AI‑driven health innovations helps guide investment, research priorities and policy support for transformative healthcare solutions.
Speaker: Audience Member (question to Rajesh Babu)
Is the average poor person in a rural area aware of Indus Action’s initiatives, and if not, what timeframe is realistic for achieving widespread awareness?
Understanding awareness gaps and timelines is crucial for designing outreach strategies that ensure equitable access to social protection schemes.
Speaker: Audience Member (question to Kritika Sangani)
How can we ensure that AI systems deployed for public welfare do not deepen digital (or economic, social, cultural) divides, especially for rural and marginalized communities?
Proactive safeguards are needed to prevent AI from exacerbating existing inequities and to promote inclusive public‑service delivery.
Speaker: Yashi (Audience Member)
Research needed on AI‑enabled water‑quality monitoring at sub‑district level (e.g., detecting iron content) to inform hackathon‑driven solutions.
Granular, AI‑driven water quality data can empower local innovators to create low‑cost diagnostics and remediation tools, improving public health.
Speaker: Himanshu AIM
Research needed on AI‑driven market‑linkage platforms for bamboo producers to connect with global buyers and improve price discovery.
Leveraging AI for market intelligence could unlock higher incomes for small‑scale bamboo growers and reduce regional price disparities.
Speaker: Himanshu AIM
Study mechanisms to convert documented grassroots innovations into viable startups and jobs (innovation‑to‑startup pipeline).
Understanding how to commercialise the large pool of local innovations can boost entrepreneurship and regional economic development.
Speaker: Himanshu AIM
Improve AI models for discovering and targeting the most vulnerable children in entitlement schemes (e.g., RTE).
Better targeting increases the efficiency and equity of social‑protection delivery, reducing exclusion errors.
Speaker: Kritika Sangani
Develop a unified digital public good (United Entitlements Interface) that lets citizens discover, apply for, and receive multiple constitutional rights through a single platform.
A single‑window interface could dramatically simplify access to a range of entitlements, scaling impact across sectors.
Speaker: Kritika Sangani (referenced by Kavikrut)
Research AI algorithms for multi‑parameter organ‑transplant matching to improve compatibility and reduce waiting times.
Advanced AI‑based matching could save lives and make transplant systems more efficient and equitable.
Speaker: Rajesh Babu
Explore AI‑powered personal doctor agents that synthesize patient data, schedule interactions, and communicate with clinician agents before human contact.
Such agents could streamline primary‑care workflows, reduce waiting periods, and enhance continuity of care.
Speaker: Rajesh Babu
Investigate multilingual and dialect‑aware AI models to bridge India’s language divide in AI applications.
Addressing linguistic diversity is essential for inclusive AI adoption across all regions and populations.
Speaker: Himanshu AIM
Design and evaluate peer‑to‑peer learning networks among state innovation missions to accelerate AI adoption in less‑served regions.
Effective knowledge sharing can reduce regional disparities in AI capacity and foster collaborative problem‑solving.
Speaker: Himanshu AIM
Research methods for embedding equity‑focused algorithms (gender balance, socio‑economic status) into AI‑driven welfare platforms to prevent bias.
Proactive bias mitigation ensures AI systems promote fairness and do not reinforce existing social inequities.
Speaker: Kritika Sangani
Examine governance frameworks that steer AI development toward a ‘race to the top’ rather than a ‘race to the bottom’, focusing on impact metrics for AI for Good.
Policy and governance research can guide responsible AI deployment that maximises social benefit.
Speaker: Kavikrut (referencing Nandan Nilekani)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Shaping AI’s Story: Trust, Responsibility & Real-World Outcomes

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by framing a “sustainable AI future” built on the three sutras of People, Planet and Progress and introduced seven “chakras” – human capital, inclusion, trust, resilience, science, resources and social good – as concrete pillars for global cooperation [1-4]. The central question posed was how to achieve trust before skill, positioning outcomes and responsibility as a competitive advantage [6].


Paul Hubbard argued that AI should be viewed through an economics lens, focusing on public value rather than mere technological adoption, and that trust is the foundation that enables innovation [26-27][39-46]. He emphasized a people-first, democratic participatory approach that meets citizens where they are rather than imposing new technology [44-46].


Erik Ekudden described how telecom networks are evolving into an “intelligent fabric” that will host AI inference for devices such as AI glasses, requiring the network to be secure and trusted [49-58][75-82]. He noted that scaling this fabric is essential for future industrial AI applications in agriculture, healthcare and smart manufacturing [54-58][60-62].


Divyesh Vithlani explained First Abu Dhabi Bank’s platform-first strategy, embedding ethical AI governance, layered data-model-knowledge architecture, and dynamic oversight through separate execution and control planes to manage agents and their performance [111-133][200-226]. He added that agents are treated like humans with guardrails, performance monitoring and “agent university” to ensure accountability and mitigate hallucinations [225-226].


Hari Shetty championed “proof over promise” and a problem-first mindset, insisting that AI solutions must run continuously, earn trust through consistent performance, and be measured beyond simple productivity using “plus scores” that track failures and quality [152-157][236-247]. On accountability, Paul stressed a clear, inclusive plan that spreads AI benefits across communities while safeguarding citizens, whereas Erik highlighted a hierarchy of agent decision-making that ties responsibility to the domain providing the service [163-170][177-186]. Both agreed that perceived AI risk is manageable; Hari called the hype overstated, while Erik warned that excessive caution in the public sector could hinder progress [344-348].


Looking ahead, Paul added that capability, competence and curiosity will differentiate AI-native nations, and Erik argued that energy-efficient hardware, software and inference distribution can keep AI expansion sustainable, with networks accounting for only a small share of total power consumption [300-306][311-326]. Finally, Paul described the AI CoLab as a cross-sector initiative that brings government, industry and academia together to solve problems collaboratively, and Bhandari concluded that aligning the seven pillars will redefine competitiveness, rebuild public trust and future-proof institutions [385-399][456-458].


Keypoints


Major discussion points


Trust as the foundation for AI innovation – The panel repeatedly stressed that trust is not an obstacle to innovation but the very base that enables it. Mridu asked how to build confidence in AI without slowing progress [33-37], and Paul replied that “trust lets you make the innovation” and that a people-first, participatory approach is essential [39-46].


The network as an “intelligent fabric” that must evolve from passive conduit to active, trusted AI enabler – Erik described how 5G/6G networks are becoming the host for distributed inference (e.g., AI glasses) and must be secure, scalable, and edge-enabled [49-60][75-82]. He later linked this infrastructure to business value, noting that AI-driven network services can generate large efficiency gains and new revenue streams [262-270][284-287].


Platform-first governance and dynamic oversight for enterprise AI – Divyesh explained that a layered, platform-centric architecture (execution plane + control plane) with built-in guardrails, deterministic “atomic” agents, and continuous performance monitoring is how banks can maintain accountability while scaling AI [130-138][200-219][225-226].


Moving from “proof-of-concept” pilots to proven, production-grade AI – Hari outlined four enterprise principles: start with the problem, adapt to legacy-heavy environments, ensure continuous, reliable operation, and earn long-term trust by avoiding hallucinations [147-154][155-162]. This shift is presented as the key to turning AI hype into measurable outcomes.


Future-oriented considerations: AI-native nations, sustainability, and ROI re-framing – Paul highlighted that beyond infrastructure, a nation’s capability, competence, and curiosity will separate AI-native from AI-dependent economies [300-306]. Erik warned that AI’s energy intensity can be mitigated through efficient hardware, software, and inference-centric deployment, turning network power use into a net-positive for emissions [311-326]. Hari and the panel also argued that ROI should be viewed as a capability (EI) rather than a simple cost-benefit metric [234-247].


Overall purpose / goal of the discussion


The session was convened to explore how global stakeholders can “shape a sustainable AI future” by aligning the seven “chakras” of human capital, inclusion, trust, resilience, science, resources, and social good [1-4]. Throughout the dialogue the panel sought concrete ways to achieve trust before skill, embed AI responsibly in public policy and enterprise operations, and translate high-level ambition into accountable, scalable actions.


Overall tone and its evolution


– The conversation opened with a formal, visionary tone, setting out broad principles and introducing the panel [1-14].


– It then shifted to a pragmatic, solution-focused tone, with detailed technical explanations about networks, platform governance, and operational safeguards [39-82][130-138][147-162].


– Mid-discussion the tone became optimistic and forward-looking, emphasizing future capabilities, sustainability, and the transformative impact of AI-native societies [300-326][447-452].


– The closing remarks returned to a hopeful, unifying tone, reiterating the “People, Planet, Progress” sutras and the belief that aligned global cooperation will future-proof institutions [456-458].


Overall, the dialogue moved from setting the agenda, through concrete technical and governance recommendations, to an inspiring vision of AI’s role in the next decade.


Speakers

Mridu Bhandari


Area of expertise: AI policy, responsible AI, multi-stakeholder governance


Role / Title: Moderator, Network18 (referred to as “Vipi Bhandari” in the opening) [S1]


Hari Shetty


Area of expertise: Strategy, technology implementation, AI consulting


Role / Title: Strategist and Technology Officer, Wipro [S4]


Divyesh Vithlani


Area of expertise: Banking technology, digital transformation, AI platform governance


Role / Title: Group Chief Technology and Transformation Officer, First Abu Dhabi Bank [S5]


Paul Hubbard


Area of expertise: Public-policy economics, AI governance in government


Role / Title: First Assistant Secretary for AI Delivery and Enablement, Department of Finance, Australian Government; also known as the “AI masked economist” (self-described) [S7]


Erik Ekudden


Area of expertise: Telecommunications networks, AI-enabled connectivity, 5G/6G infrastructure


Role / Title: Chief Technology Officer, Ericsson [S8]


Additional speakers:


Harish Yatich


Area of expertise: Technology strategy (introduced as part of the panel)


Role / Title: Strategist and Technology Officer, Wipro; the name appears only in the opening and is likely a mis-transcription of Hari Shetty, who holds the same title


Dinesh


Area of expertise: (not specified)


Role / Title: (mentioned in a question prompt; no title provided)


Full session reportComprehensive analysis and detailed insights

Opening and framing


Mridu Bhandari opened the session by framing a “sustainable AI future” built on the three sutras of People, Planet and Progress and introduced seven concrete “chakras” – human capital, inclusion, trust, resilience, science, resources, and social good – to guide global cooperation [1-4]. She then framed the core challenge of the AI-first decade as achieving trust before skill, arguing that outcomes and responsibility must become a competitive advantage rather than a cosmetic concern [6-8].


Paul Hubbard’s economics-first view


Paul Hubbard, first assistant secretary for AI delivery and enablement at the Australian Department of Finance (rendered as “Paul Hrubag” in the opening), responded from an economics perspective, insisting that AI should be evaluated on the public value it creates rather than on mere technology adoption [26-27]. He rejected any trade-off between trust and innovation, stating that “trust lets you make the innovation” and emphasizing a people-first, democratic-participatory approach – meeting citizens where they are and building on existing familiarity with AI – as essential for public confidence [39-46]. He also recounted how he earned the nickname “AI-masked economist” during the COVID-19 pandemic, when he launched a podcast to demystify economics and AI jargon [??].


Erik Ekudden on the intelligent fabric


Erik Ekudden described the evolution of telecom networks from passive data carriers to an “intelligent fabric” that will host AI inference workloads such as AI glasses, which offload processing to the edge [49-58]. He highlighted that 5G/6G must be secure, trusted and scalable to support industrial AI in agriculture, healthcare and smart manufacturing, and that the network already provides the guarantees needed for billions of devices [60-62][75-82]. The transition to an active, AI-enabled fabric is presented as a prerequisite for future business value and new revenue streams [citation needed].


Divyesh Vithlani’s platform-first, agent-centric governance


Divyesh Vithlani outlined First Abu Dhabi Bank’s platform-first strategy, embedding ethical AI, data and model governance into a layered architecture (data, model, knowledge, context) and separating execution from control planes to enable dynamic oversight of autonomous agents [130-138]. Agents are treated like human staff – with guardrails, performance monitoring, and an “agent university” that tracks token consumption, output quality and hallucinations – ensuring accountability and mitigating risks [200-219][225-226]. He also noted that AI is a general-purpose technology, reinforcing the need for a platform-first approach [??].
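The execution/control-plane split and the human-style performance management described above can be sketched in a few lines. The following is a minimal illustrative sketch, not First Abu Dhabi Bank’s actual system; all class names, the sample agent, and the 5% hallucination threshold are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the execution/control-plane split: every agent
# action is recorded by a control plane that enforces guardrails and
# reviews agents much like human performance appraisals.

@dataclass
class Agent:
    name: str
    tokens_used: int = 0      # cost consumed, analogous to a timesheet
    outputs: int = 0          # units of work produced
    hallucinations: int = 0   # recorded failure instances

@dataclass
class ControlPlane:
    max_hallucination_rate: float = 0.05  # assumed acceptable threshold
    agents: dict = field(default_factory=dict)

    def onboard(self, agent: Agent) -> None:
        # like onboarding a new graduate with guardrails and supervision
        self.agents[agent.name] = agent

    def record(self, name: str, tokens: int, hallucinated: bool) -> None:
        a = self.agents[name]
        a.tokens_used += tokens
        a.outputs += 1
        a.hallucinations += int(hallucinated)

    def review(self, name: str) -> bool:
        """Return True while the agent stays within its guardrails."""
        a = self.agents[name]
        rate = a.hallucinations / a.outputs if a.outputs else 0.0
        return rate <= self.max_hallucination_rate

cp = ControlPlane()
cp.onboard(Agent("kyc-checker"))          # hypothetical banking agent
for _ in range(19):
    cp.record("kyc-checker", tokens=120, hallucinated=False)
cp.record("kyc-checker", tokens=120, hallucinated=True)
print(cp.review("kyc-checker"))  # → True (1/20 sits exactly at the 5% threshold)
```

A real deployment would add escalation and offboarding when `review` fails, mirroring the session’s point that agents, like staff, earn wider responsibility only through monitored performance.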


Hari Shetty on proof-over-promise and “plus scores”


Hari Shetty contrasted “proof over promise” with the prevailing pilot-centric mindset, urging a problem-first methodology, adaptation to legacy-heavy environments, continuous-operation models and the earning of long-term trust through hallucination-free performance [147-152][155-162]. He introduced the term “product license” to describe the outcome-driven bar that solutions must meet before being marketed [??]. Shetty also defined “plus scores” as a metric that records model failures, hallucinations and any deviation from organisational quality thresholds, providing a concrete measure for ongoing trust [??]. Finally, he framed AI as a fundamental capability rather than a simple ROI calculator, arguing that productivity is only an early indicator [236-247].
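The “plus score” idea, counting failure instances and testing them against a process-specific acceptance level, can be expressed as a simple calculation. This is a hedged sketch of the metric as described in the session; the function names and sample thresholds are illustrative assumptions, not Wipro’s actual model.

```python
# Illustrative "plus score": fraction of runs that met the quality bar,
# checked against the threshold a process owner declares acceptable
# (the session cites cases needing 99.99% and others where 85% sufficed).

def plus_score(total_runs: int, failures: int) -> float:
    """Fraction of runs free of failures (hallucinations, quality misses)."""
    if total_runs == 0:
        return 0.0
    return (total_runs - failures) / total_runs

def within_threshold(total_runs: int, failures: int, required: float) -> bool:
    """True when observed reliability meets the process's acceptance level."""
    return plus_score(total_runs, failures) >= required

# A tolerant back-office task versus a strict payments-style process:
print(within_threshold(1000, failures=120, required=0.85))    # → True (0.88)
print(within_threshold(10000, failures=2, required=0.9999))   # → False (0.9998)
```

The point of the metric is that there is no single pass mark: the same failure count can be acceptable for one process and disqualifying for another.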


Accountability and governance


Paul stressed the need for a clear, inclusive national plan that spreads AI benefits to rural and marginalised communities while safeguarding citizens, framing responsible AI as a whole-of-society leadership task [163-170]. Erik added that responsibility follows the hierarchy of decision-making: each domain that provides a service – network, cloud, application or device – must retain accountability, and existing telecom guardrails can be translated one-to-one into the AI world [177-186]. Divyesh reinforced this by highlighting the platform’s execution/control separation as a mechanism for enterprise accountability [??].


Risk perception debate


A moderate disagreement emerged over whether the public sector still overestimates AI risk (Erik) versus the Australian government’s recent shift to a more proactive stance (Paul) [346-348][351-353]. A further divergence concerned the locus of AI integration: Erik advocated a network-centric “intelligent fabric”, while Divyesh promoted a platform-centric governance model [75-78][130-138].


Sustainability and energy efficiency


Erik argued that AI’s energy intensity can be mitigated through energy-efficient hardware, software and inference-centric deployment, noting that network power consumption is a small fraction of total electricity use and that digital technologies can reduce emissions in other sectors by up to 15% [citation needed].


Future outlook and cross-sector collaboration


Paul described the AI CoLab as a physical hub where government, industry, academia and NGOs co-create responsible AI solutions to real-world problems [??]. Erik envisioned AI-native networks as a “creative network” that can dynamically compose services for wearables, robotics and autonomous agents [??]. Hari projected a dramatic increase in decision-velocity, arguing that current organisational processes are too slow and that AI will enable near-real-time decision-making [??]. Divyesh painted a 2030 banking scenario where interactions are mediated by AI avatars, with instant cross-border payments and frictionless product discovery [??]. He also outlined a three-step plan for CEOs: (1) define a clear AI vision, (2) re-think operating models rather than merely automating tasks, and (3) engage Wipro for implementation [??].


Actionable take-aways


From the discussion the panel distilled several actionable recommendations:


(i) Build trust through people-first, participatory design and secure, low-latency networks [39-41][75-82];


(ii) Adopt a platform-first architecture with execution and control planes, layered ethical governance and continuous agent monitoring [130-138][200-219];


(iii) Move from pilots to always-on, problem-driven AI solutions that earn trust via consistent performance [147-152][130-138];


(iv) Measure AI impact not only by productivity but also by “plus scores”, decision-velocity and risk mitigation [236-247];


(v) Treat AI as a fundamental capability rather than a pure cost-benefit exercise [236-247];


(vi) Manage risk with calibrated guardrails, avoiding premature over-regulation while ensuring public-sector accountability [344-345][351-353];


(vii) Foster cross-sector collaboration through initiatives such as the AI CoLab [??];


(viii) Pursue energy-efficient AI hardware and inference distribution to align AI expansion with sustainability goals [citation needed].


Closing


Mridu Bhandari reiterated that aligning the seven chakras of human capital, inclusion, trust, resilience, science, resources and social good will allow AI to move beyond optimisation of businesses to redefining competitiveness, rebuilding public trust and future-proofing institutions for decades to come [456-458].


Session transcriptComplete transcript of the session
Mridu Bhandari

for shaping a sustainable AI future that we are calling People, Planet and Progress. And to translate these sutras into action, we are looking at what we call the seven chakras of aligned global cooperation. So these are the concrete pillars that will really turn ambition into accountability. We have human capital, inclusion, trust, resilience, science, resources and social good as the seven chakras that we are going to be talking about. Today we have with us a very eminent panel trying to answer the defining question of this AI first decade that we are in. How can we achieve trust before skill? Outcomes over optics and responsibility as a competitive advantage. I’m Vipi Bhandari from Network18 and I’m very delighted to be joined by a panel of very distinguished guests here tonight.

Starting from my left, Paul Hrubag, first assistant secretary for AI delivery and enablement at the Department of Finance in the Australian government. Next to them, Vibhesh Vitlani, Group Chief Technology and Transformation Officer, First Abu Dhabi Bank. Eric Ekudin, the Chief Technology Officer of Ericsson. And Harish Yatich, Strategist and Technology Officer at Wipro. Welcome, gentlemen. Thank you so much for joining us here today. You know, perhaps let’s set the context with the foundations of trust and skill. And Paul, if I may start with you first, you know, I was going through your LinkedIn profile and you call yourself the AI masked economist.

So very interesting moniker there. Why don’t you first tell us what that really means? And then we’ll jump into the rest of the stuff.

Paul Hubbard

Thanks for having me, and it’s great to be here in India. I think we all bring a lens to AI. My lens that I bring is economics. I’m a public policy economist, which for me means AI is not about technological adoption. It’s all about what can generate public value, what generates public welfare.

Mridu Bhandari

And why do you call yourself the masked economist?

Paul Hubbard

Economist. That’s another story for you. That started in COVID, remember, when we were all wearing masks. And at the time, I started a podcast, which was all about explaining economics and unpacking the jargon. And I’ve kept that because I think explaining AI, unpacking the jargon, seeing how it relates to everyday life is really, really important.

Mridu Bhandari

Right. Now, when we talk about AI for social good, public permission is really, really important. Public trust is very important. Now, how do we really build society’s confidence in AI without slowing down innovation? How are you doing that in Australia? Give us some examples of how you’ve been able to do that, especially because citizens all over the world today are demanding a lot more transparency and accountability when it comes to not just AI, but everything in general.

Paul Hubbard

Yeah, absolutely. I think it’s really important that we don’t frame it as like trust versus innovation. It’s actually a foundation of trust that lets you make the innovation. It’s starting from the proposition of what’s the problem we’re trying to solve or what are we trying to deliver for citizens? If you’re a government, what are you trying to deliver for your customers? Meet them where they’re at. Now, different countries, different populations have different comfort already, different familiarity with AI. You’ve got to know where people are up to, what they want and build from there, rather than just say, here’s a brand new thing that we’re going to impose on you. So I think really that framing, that democratic, participatory approach, that people-first approach, is key.

Mridu Bhandari

Right. Eric, coming to you, AI is often discussed at the application layer, but you’ve mentioned that intelligence must be embedded into the networks themselves. Now, how does infrastructure really evolve from being a very passive carrier of AI to becoming this active enabler of trust and of resilience?

Erik Ekudden

Yeah, so first of all, Ericsson builds networks, advanced connectivity, so 5G and 6G, and increasingly that’s becoming this fabric that we all depend on. But let’s start by thinking about what people are using today. Gen AI is already on hundreds of millions of smartphones, actually billions, already running AI applications across the mobile infrastructure. So it’s already secure and trusted. The network has already provided the guarantees that you need. But I think, especially here in India, we’re talking about industrial AI applications, agriculture. There’s going to be a lot of AI in the fields, hospitals, education, smart manufacturing. So there’s going to be a lot more dissemination of AI from where we’re focused today in training to distributed AI or inference generation.

That’s going to happen much further out in the network. So the network is actually becoming the host for all those great AI experiences. We need to scale the networks to handle that. I don’t think I’m the only one. Maybe not everyone carries two pairs of glasses here, but AI glasses. They are already available in millions. Good AI glasses that give you navigation support, that give you real-time language translation, maybe a prompt if you are on a stage making a keynote. I mean, these kind of things, they cannot be done on the device, on the wearables. You need to offload the AI, the inference, from the glasses to the edge. That’s why we talk about this as a transition to an intelligent fabric. The network is already secure, trusted. It’s going to be a carrier of all these inference workloads. So we’re just starting that journey. But I think it really comes back to basic principles. Networks need to be trusted. They need to be secure. They’re already moving from consumers into enterprise and government services, mission critical, big example here in India.

Mridu Bhandari

So what have the AI glasses been doing for you this week?

Erik Ekudden

I didn’t read every question because everything is perfect in India when it comes to finding new ways. On a serious note, I actually use them privately at work. But I start to see people getting really … good value because it is an AI assistant. And think of it once, especially like me, wearing glasses. Once I’ve switched for good to these glasses, why would I go back? Even when I’m indoor, even when I’m at home, even when I’m training or in the elevator, I want it to work. And that, of course, means that the network, this intelligent fabric, needs to be so much better than it is today. Of course, great 5G networks here in India, but in the future, we will need even better ones.

And I think this is a change in terms of we will not get the full value of AI. We will not leverage AI fully until we connect it to that better network for AI. And that’s really what I’m focusing on. But you want to try it on? It’s a good one. No, it’s a great one. It’s a little bit fantastic. That was a little bit of a Gallup poll, yeah. Using AI or AR wearables, glasses. Earpods. Cameras. A few? Okay, well, two, two. Probably a representative crowd here. I think we are very early in this journey. It’s going to be a fantastic journey, I believe, for both consumers and anyone of us working in companies.

Mridu Bhandari

Absolutely, absolutely. Well, I’m going to come back to you on that. Bringing Divyesh in: you know, in banking now, trust is not philosophical, it is existential. So how do you really embed AI into core decision making while ensuring you don’t dilute any risk discipline? What governance models have you put in place that actually work for you? You know, any best practices that you can share with us here today?

Divyesh Vithlani

Sure. Well, first of all, it’s great to be here. And I’m already, you know, benefiting from the wisdom of my panelists, because my kids will tell you that I’ve been in denial about needing glasses. The eyesight is perfect, but the enlarging, the zooming really helps. But in reality, now I’ve got a different story for them, that I’ve been waiting for AI glasses before I really don a pair of specs. But coming back to your question, I kind of pick up on what Paul said. It’s not either or. It’s not about you have trust or you have productive AI. What we believe is like any regulated institution, that there is no compromise on risks and controls.

Our business in banking relies 100% on trust. So that is not a value that we can compromise on at any time. However, in order to make sure that we do deploy AI at scale in a trusted manner, it starts with conviction. And we have conviction right at the very top of the organization that AI is a force for good. We’ve heard a lot this week about AI being a general purpose technology. I really love what Eric said about AI in the network, and I’ll sort of come to that in a second, because a large part of the answer is establishing a platform. But if we take a step back, a conventional organization is defined by its people, its processes, and its technology.

And there are all sorts of safeguards, guardrails, controls that have been built in. In the AI world, I think it’s going to be about agents, models, and data. And I think we’re going to have to have the same guardrails, and perhaps even stronger controls, because it will need AI to oversee and govern AI to be really effective. So the approach we’ve taken is on the basis of the conviction that we have that AI is a force for good, it is a game changer, and it is truly going to transform everything about how we live, work, play, and bank. We want to basically make sure that we empower the entire organization to leverage and scale AI in a safe, secure, efficient, and compliant manner.

Now, the only way, in my opinion, to do that is to take a platform-first approach. Just like Eric said about the network needing to be safe and secure, our AI platform and our agentic platform need to be safe and secure. So we have taken the approach of building a platform with all the different layers from data, model, knowledge, context, and the use cases that sit on top of that, by building ethical AI, data governance, model-level governance, and the fair and appropriate use of AI into the platform. And by taking that approach, we are able to unleash the power of the technology in the hands of the end users. So just like when you open up Microsoft and start a new Excel, you’re not thinking about is this safe, what’s the underlying architecture.

You’re doing it fairly intuitively. And we’re going to be able to do the same thing with AI, that our folks, our business colleagues, our engineers can use AI as naturally and seamlessly as they do any other task. So taking that platform-first approach is what really is driving our strategy to ensure that we drive AI at scale but with all the right trust and safeguards.

Mridu Bhandari

Right. All right. Bringing in Hari as well, you know, we’ve talked a little bit about public permission. We’ve talked about infrastructure. We’ve talked about governance, security. There’s a final leap, which is from promise to proof. Now, enterprises are, of course, often caught between the AI hype. There is hesitation. You speak a lot about proof over promise. Elaborate that for us. And what really separates scalable AI from the perpetual pilots that we keep seeing a lot of enterprises deploying?

Hari Shetty

First and foremost, very happy to be here with this panel. And putting on the Wipro lens, what do we do? We take Eric’s network, layer in the intelligence on top of it, and provide solutions to Divyesh. That’s where we fit into this entire graph in terms of what we do. Now, coming back to proof over promise, you absolutely brought up the most important topic that’s in discussion across the summit here as well. AI is no longer about pilots. It’s about being able to get value out of AI. And when we talk about proof over promise, we talk about four distinct elements that are important from a Wipro perspective.

Number one, don’t start with a model. Don’t talk about model X or model Y and then start with model-first thinking. Start with problem-first thinking. So you pick a problem, figure out what’s the right approach to solving the problem, and then work your way backwards to look at what models can actually help you solve the problem. That’s the first approach.

The second part that we take care of is that the enterprise story is very different from the consumer story. Enterprises are necessarily messy. You’ve got technology that’s 20 years old, 30 years old. You’ve got different personas, you’ve got different security needs. Data is in fragments across the organization. So the enterprise story is a completely different story than a consumer-grade story in terms of how things have to come together. In that context, our ability to prove a solution in the enterprise world is extremely important for us. And when we show it works in an enterprise, that’s when other enterprises build trust, that’s when it’s ready for diffusion. And by the way, we act as client zero for our solutions: if we don’t get it to work in our own enterprise, there’s no point talking to any of the clients about implementing the solution.

The third principle is that whatever solution we build, it’s not about making it work once. It should work every day, every hour, and every minute, and only solutions that are capable of following that principle are the ones that we actually take to the market. That’s another principle that’s extremely important for us.

And last, going back to the trust that we all talked about: if you look at human trust, human trust is earned. Even agentic trust is earned. You need something that can work for a long period of time without hallucination, without fundamental flaws in the model, so that there’s trust built into it. Only when things work consistently over a longer period of time do you build trust. And these are the four principles that we use to talk about proof over promise, as what we call the product license.

Mridu Bhandari

Right. All right. Well, we’re going to shift gears a little bit and also talk about accountability because we’re talking a lot about architecture. Let’s also talk about who’s accountable for what in an enterprise and perhaps in the society as well. Now, Paul, when we talk about responsible AI at a national level, what does accountability really look like for leaders? Is it about measurement frameworks? Is it about reporting outcomes? Is it about, you know, independent oversight? What are the signals that you need to tell citizens that, you know, this is being deployed in your interest?

Paul Hubbard

Yeah, thanks. I think it’s really about having a clear plan that you can communicate. In our case, that making it clear throughout the economy, throughout government, throughout society, that we’re going to seize the opportunity of AI. That means better jobs. That means investment in data centers and all the things we’ve been talking about. But the second thing is really even perhaps more important is we’re going to spread the benefit of AI, not just to people in the tech center, but to every aspect of community, people in rural areas, to people from marginalized groups, to people who maybe haven’t had the full benefit of current technology now. So spreading that benefit further. And then finally, just making it really clear that we’re also acting at every level, whether it’s businesses or whether it’s government, to keep citizens safe in the process.

We’ve had a big conversation here at a model level about AI safety and AI harms, but we’ve also got to have that conversation in the context of our communities and what does it look like to keep citizens safe there. So I think it’s the whole-of-society leadership piece. It’s not just saying, well, the tech people can look after this from a technical perspective.

Mridu Bhandari

Right. And, you know, ecosystems, of course, are very, very interdependent today. You have cloud providers, you have the telecom networks, you have enterprises. There are decisions flowing across the distributed stack by the second. So who’s really accountable?

Erik Ekudden

Yes. I want to build on what was said here about the difference between where we are today and when we are introducing agents at scale. And to me, there isn’t so much a question of who, because if you are replacing work with an agent, that basically needs to translate into an accountability and then also a transparency, trust and governance issue around those agents. And increasingly, we get agents at different levels. There’s super advanced agents at the top. And, of course, as you follow down the stack, we get more fine-grained agents having less knowledge, making decisions that are guard-railed in a different way than the top models. So think of this as a hierarchy of decision-making and, of course, accountability.

But to me, there’s no question that if you are, and when you are, introducing agentic technology, you need to take the responsibility for your part. If your complete service consists of many different agents on the cloud side, on the advanced connectivity side, on the application side, device side, it needs to come together. But, of course, responsibility should reside in the domain that you are providing, and that you are providing to the market, to the customer, to the employees. Then, of course, it’s never as simple as that, but in the world that I come from, in telecom, we’re already providing critical infrastructure. People’s everyday life depends on it. So we already have guardrails from a safety and security perspective that we have to live up to

in today’s world of 5G and telecom. That, to me, should carry over into the agentic world. I know there are, of course, discussions about increasing governance, increasing regulation. I think that’s a dangerous way to go, because if you regulate before you have innovated, you never know what you will get. But I think if you stay with these basic principles, that we do have requirements and we have guardrails in the world we’re coming from, and you translate that more or less one-to-one into the agentic world, I think we are at a good starting point.

Mridu Bhandari

Right. And, Divyesh, you know, we are talking about this agentic world, as Erik said, with machines working hand-in-hand. Now, as these identities shift, how should we be rethinking governance? How should we be rethinking trust? And, of course, governance is never static. It’s going to go on. It’s going to keep evolving. So what does dynamic oversight really look like, especially in a very regulated industry like yours?

Divyesh Vithlani

Look, I really love that question because at the end of the day, as a CTO in a bank, I am accountable. I am responsible for the platform that we construct and the output that gets generated from that platform, whether it’s from a human or an agent, right? So that’s my accountability. And this is where I have interesting debates and conversations with colleagues from Wipro and other partners of mine who are very eager to sell me solutions. And I said, if the solution is a black box, then I’m going to find it very difficult to integrate that into my environment because ultimately I have to be able to explain the output that gets generated. So to your question in terms of that dynamic oversight, it again goes back to the platform and the way we’ve architected it.

The platform, sort of, you know, without getting too technical, is on two planes. There’s an execution plane and a control plane, right? But again, it’s not that sophisticated. It’s just like when you onboard a new graduate into your organization. You will give them a set of guardrails and a set of responsibility that is befitting of their skill set and their experience. You provide the right level of supervision. You give them the right level of oversight. And as they grow, become more proficient, you clearly give them more responsibility. We treat agents in exactly the same way. So there’s a lot of conversation about agents being autonomous and hallucinations. Well, individuals can do the same thing if they’re left to their own devices, right?

So the way that we have built and architected our agentic architecture is that, as Eric said, there are different types of agents. At the lowest level, agents are not just autonomous, but they’re atomic. And with the right set of guardrails, with agentic operating processes, they are also deterministic, right? and we basically create agents to perform a single task. And we make them as reusable as possible to compose them and to aggregate them into a higher level of workflow. And as you get more, as they learn more, which is the good thing about agents, they learn faster, you give them more responsibility, just like you do to humans. But again, going back to that execution plane where you are monitoring every activity that is being done through a control plane, and also the other features of the platform include how we sort of onboard, offboard agents just like you do with humans.

And we also have practices in place to manage the conflicts between agents and humans because, again, just like you have conflicts between two humans, you have conflicts between an agent and a human, right? And you need to be able to detect that in real time. So that’s some of the kind of work that we’ve done, and it’s again early days. I don’t mean we have all the answers, but certainly the space is moving very fast. The key is that we humans always have to be in control, so the way we design the architecture is to ensure that happens.

Mridu Bhandari

So are agents being put through tough performance appraisals? Are they being fired for hallucinating?

Divyesh Vithlani

100% right. And again, it may sound really basic, but I view an agent no different to a human, so you do performance management. There’s a concept that we call agent university, right? And I love that term because I was chatting earlier with James about this: at university you’re learning how to learn, right? So that’s what we want the agents to do as well. And, you know, whilst humans may fill out a timesheet to account for the work that they’ve done and to measure the output that they’ve produced for the cost that they’ve consumed, agents may not fill out a timesheet, but we are monitoring the agent for the work, the tokens that they’ve consumed for the output that they’ve generated, to ensure that we measure their performance in a similar way.

Mridu Bhandari

Wonderful. Well, Harry, bringing you in as well, how should organizations measure the ROI? That’s a question that enterprises around the world have been debating. What’s the value beyond the profit or beyond the bottom line? Are we looking at trust scores? Are we looking at productivity? Are we looking at decision velocity, risk mitigation? At the core, how are you looking at the ROI?

Hari Shetty

Probably one of the most debated topics, and one of the topics that I hear a lot, and I will provide you the Wipro context in terms of how we are looking at productivity. Point number one: while everybody talks about use cases and productivity measurement of AI, we think AI is beyond just measuring return on investment or measuring productivity. It’s almost like going back in time: could you ask, should we implement an email system, what’s the ROI on the email system? Could you ask, for example, why should I go to the internet, I have a brochure already in the company, why should I be on the internet? So a lot of the thinking should change from looking at ROI to looking at AI as a fundamental capability and a fundamental shift, a journey which is irreversible in terms of where we are going. So it’s not a question of should we invest because there’s ROI or not; it’s a question of we have to go down that path. Within Wipro we look at it as a capability, so we are not really asking this question of, for every single use case, is there an ROI on it? Now, having said that, you know, as a business leader, ROI is extremely important.

Mridu Bhandari

Well, your clients must be demanding the ROI for sure.

Hari Shetty

Yes, that's equally true. So the earliest signal of ROI that we talk about is productivity, right? We always talk about productivity as an early indicator of what can come down the pipe, but productivity is only an early signal. The resulting benefit is always an end outcome. It can be cost. It can be units produced. It can be better quality. It can be cycle-time reduction. It's many of those things. And our goal has always been to move beyond productivity, because productivity is the number people cite most frequently in AI, and to look at the end outcomes we can achieve. Our models are built to help clients understand the end benefit of AI rather than just look at productivity as an element.

Trust scores are becoming equally important; I'll just touch on them for a minute. When we look at trust scores, we are looking at how many instances of failure happened, and whether that is within the bounds of what the organization, or the process, says is acceptable. So it's important to measure quality aspects, failure aspects, the hallucination that we talked about, all the other aspects of AI where it can go wrong, and then measure against the task goal to see whether it's appropriate for the process we're talking about. So we had situations where we talked about probabilistic models versus deterministic models.

We had customer cases where 100% was the only acceptable answer, or 99.99%. There are situations where 85% is good enough. So again, there's no single answer to this. It depends on the kind of process, the kind of problem we're trying to solve.

Mridu Bhandari

Right. And do you think business innovation would perhaps be one of the biggest ROIs? And are there any outstanding cases of business innovation you've seen where AI has been scaled successfully?

Hari Shetty

Yeah, that's a fantastic question. And again, let me give you a quick example, because that will bring this to life. One of the projects we did was for an energy client, for a refinery. Obviously everything was automated and instrumented, with a lot of sensors all along the way, and they asked us: what's the value of AI in this context? The work we did for them was the analysis of a flame. Interestingly, from the flame we could extract information about combustion efficiency, the fuel-to-air mixture ratio, and the maintenance state of the equipment, all derived from models we built just by looking at the flame. The kind of information we could secure just by looking at the flame was so much superior to a sensor-based approach, because sensors typically tell you whether something is working or not based on a threshold. Here we could actually track the health of what's happening through incremental change, compared to the on-or-off picture you get with sensors.

Mridu Bhandari

Fantastic. Erik, you want to add?

Erik Ekudden

Yeah, can I just add one thing? I think it's so interesting to look at how, in our world, we talk about this intelligent fabric of 5G. And of course there are gains if you apply AI: in efficiency, in productivity, in better customer experience. You can cite a 10%, 20%, or 50% saving as a great achievement, and we're talking about billions of dollars there. But where our customers get super excited is when they take the complete network, use modeling on top of it, and then start to produce new outcomes. That's business growth. And of course it's not always that you can find that clear case.

But that’s really where AI and autonomous networks are helping. Saving, yes, TCO is important. But it’s very much about that business growth.

Mridu Bhandari

Any example you can share with us there?

Erik Ekudden

Yeah, I think Glasses was one example here. But in the future, every device, every application, every AI service will need its own specific service quality, latency, all of that. So you can start to sell services that are tailored for mission-critical use, for enterprises. And that's what leading customers, including here, are doing: they're using AI for that. That kind of segmentation and growth of the business is an upside that is unlimited. So, of course, it's more exciting.

Mridu Bhandari

Absolutely. Well, let's also look at the long-term competitiveness and value creation that we can achieve with AI. Paul, if we were to project 10 years ahead, what do you think would really separate AI-native nations from AI-dependent nations? You know, is it infrastructure? Is it talent pipelines, compute capacity? What would you add to that list?

Paul Hubbard

I would add capability, competence, and curiosity. A lot of the things you mentioned, data centers and so on, will be built; there will be investment. But the underlying models and the compute will be commoditized, and what will set countries apart is the ability of government institutions to adapt, the ability of the economy to be flexible to new approaches, and the ability of the workforce to find the new jobs, the new wants and needs that are created, and, as the bottlenecks shift, to be able to move to those.

And I've got to say that coming to India this week, I see not just competence, capability, and curiosity, but a downright enthusiasm for this. So I think maybe India is one to watch.

Mridu Bhandari

Good to know, and happy to hear that, of course. Well, Erik, AI demands massive compute, massive energy, massive connectivity. How do we reconcile infrastructure-scale AI expansion with sustainability? Even as AI scales globally, how do we ensure that efficiency is imperative in everything that is deployed?

Erik Ekudden

Well, AI is energy-intense, especially now in the training phase. Some of the numbers out there are mind-boggling, and I'm not even sure we're going to need the kind of energy that has been predicted. But as I was saying before, we're moving from that big-data-center training to distributed inference; that's where the puck is going. That means you need to scale it to something like 8 billion inferences for glasses, and tens of billions of sensors using AI, or visual sensors. So what we are doing, and what needs to happen, is to have energy-efficient hardware, energy-efficient software, and energy-efficient AI models.

Small models when you can get away with that, and of course big models when you can't. So we're not going to explode energy consumption just because we use more AI; in fact, we're going to find even smarter and better ways to do it, on both the hardware and the software side. Then, just to put things in perspective: all the world's networks account for around one percent of total power consumption, and by using more digital technology you are able to reduce emissions in other sectors by as much as 15%. So it's roughly a 10- to 15-times payback on that energy consumption. And again, if you combine that with what I said about being really conscious about energy efficiency as you move further out, I think it's actually going to be a sustainable way to do a lot of things, not just replacing unnecessary travel and logistics chains with more digital means.

Everything is going to be more efficient, so I think we have to be a little bit careful before we say that it’s just exploding and it’s completely outrageous. Because if you just project those big data center training clusters, it looks scary, but that’s not the whole picture.

Mridu Bhandari

All right. Well, Divyesh, while we are talking of value creation from AI, many organizations are still measuring AI success in terms of cost savings. At your organization, how are you reframing AI value in banking: resilience, fraud protection, customer trust, capital efficiency? What are some of the metrics you are tracking to ensure this is true value creation?

Divyesh Vithlani

I think it's a question that is constantly exercising our minds. If I start with the productivity question you asked earlier, whilst there isn't a straightforward answer, I look at it at three levels. First, AI provides micro-level productivity through co-pilots and technologies like that, which might be difficult to measure, but it certainly helps with literacy and the overall level of education and awareness in the organization. Secondly, at the enterprise level, and this is your point on value creation, we absolutely see the potential of AI to drive significant ROI. When you take very complex processes that have been relying on earlier technologies, whether it's RPA, OCR, et cetera, and you apply AI and agentic approaches, you can actually take them to the next level.

And these are extremely complex, error-prone processes involving large sums of money. When we've applied AI and agentic approaches to them, we've seen incredible outcomes, which is giving us tangible value creation. The third aspect, if we really take a step back: certainly in banking, what is our biggest source of competitive advantage? It's not necessarily the technology or the products or any other capability, because the next person can come along and emulate those. It's really our ability to respond and react to change faster than our competitors. And that's what AI is going to help us do in terms of creating value, because it allows us to respond to change faster, to run rapid experiments, and to scale and double down where we see significant ROI.

Mridu Bhandari

Right. Okay, so I have a question for all of you, and perhaps you can take about 30 seconds each. Do you believe enterprises today are overestimating or underestimating AI risk? And how should leaders and boards measure AI trust readiness in practical terms? Hari, maybe you want to start on that one.

Hari Shetty

See, there is certainly a level of risk that one should be aware of and work with. In every business there's an element of risk that one has to mitigate, and AI is no different from that perspective. But at the same time, the hype about risk is also overstated. It's a manageable risk, not an uncontrolled, unmanageable one, and with the right kind of toolset that Divyesh talked about, it's definitely possible to get the best value out of AI without actually exposing oneself to risk.

Mridu Bhandari

Okay, that's a very diplomatic, balanced answer you've given us. Erik, what do you think?

Erik Ekudden

I suspect that the risk assessment among enterprises has become quite realistic; they don't overestimate it, and the risks are manageable. On the government side, I think there is perhaps still an overestimation of risk, trying to be too cautious, and that could hold things back in certain public sectors and in other areas. Then again, the risks are very, very big if you mistreat this extremely powerful technology. So I'm not saying we're over the hump, but that's what I think.

Mridu Bhandari

Paul, you want to take that on? Considering Erik just said that perhaps the public sector overestimates risk, would you say that of the government in Australia as well?

Paul Hubbard

I mean, certainly governments have a responsibility to start off with a more cautious approach than private-sector folk. I'd say there's a shift from the uncertainty of something new that isn't quantifiable to actually understanding the risk, and once you understand the risk, you can manage it. So certainly over the last year or so, the government of Australia has taken a much more active posture towards AI, in a sense embracing the risk a little more than we were in the past. But as we grow the capability, with the foundation of trust and the guardrails that we need, it means you can actually manage that risk, and that's the key thing.

Mridu Bhandari

All right, Divyesh?

Divyesh Vithlani

Look, with any so-called new technology there is always going to be a level of fear, uncertainty, and doubt. But the paradox for me is that AI is actually not a new technology; in fact, it predates cloud, mobile, and robotics. I was writing AI programs at university. AI was just well ahead of its time. We needed the cloud to be able to process large amounts of data; we needed the kind of data centers we're talking about for the compute, et cetera, for this technology to really come to light. And clearly, as we've gone through digital, social, cloud, and data, we've seen many, many regulations along the way around data protection, how best to use cloud, data sovereignty, data residency, et cetera.

So as long as we are not shedding the controls we've already built, and we tighten the guardrails as we deploy AI, deploying it through a platform-centric approach where the necessary guardrails are built in, I think those risks will be managed and mitigated. And hopefully what we'll start to see is that the benefits of this combined technology will far outweigh the kinds of risks and concerns we're seeing. The only qualification I would make, and I think this has been talked about at this conference, is making sure that we do take everyone along.

Mridu Bhandari

Absolutely. I mean, it has to be inclusive for all, especially in a country like India where, you know, we have divides of many kinds. Well, let's spend a few minutes looking ahead and doing some crystal-ball gazing. Erik, if I can come to you: we are entering an era of autonomous networks, embedded intelligence, and physical AI, from robotics to massive systems. What does an AI-native network look like, say, five years from now, because anything more than five is too much to envision, and how do we get the mobile and cloud infrastructure ready for that future?

Erik Ekudden

Well, I think we have to look perhaps further out than five years, because we're building something that should work for society in broad terms. But of course AI is moving super fast, and when you ask about AI-native, I think any industry, including the one I represent, is going through major change now. AI-native is not just how you build your products, that they need to be data-driven, need to learn, need to be updated all the time. It's very much about your processes, about how you go to market, how you engage with lifecycle management and handle questions. I think we talked about this in the pre-meeting as well.

There are so many changes in how you build AI-native systems that it is a fundamental rework for, I would say, most product companies, and actually service companies as well. So an AI-native world is something that is much more responsive to the fast changes we talked about, and an AI-native network is a network that is responsive to all of these needs. You already mentioned physical AI, which is just around the corner: humanoids, robots, drones, all the things that require much more tailoring and much more flexibility from the network, or the intelligent fabric. So we need to do what I call user experience at scale, or massive user experience.

Everything has to have its own unique requirements met, and I think only AI-native networks that respond in real time to these needs, adapt, and create the best user experience can handle it. So it's going to be a very different world, very intuitive, judging by what we see on the wearable side, but it's going to be a completely new setup.

Mridu Bhandari

Right. And Paul, you know, as we're looking ahead, of course, public-private partnerships are going to be key to any kind of success that we're going to see. Tell us a little bit about AI CoLab and your approach towards bringing together public institutions, academia, and industry to really advance the practical adoption of AI while also keeping it very transparent and ensuring that public good is at the center of it.

Paul Hubbard

Absolutely. So the AI CoLab is a cross-sector initiative where folk from government, folk from the private sector, academics, and not-for-profits can get together in one place, often in person, to understand things. And I think everybody who's come to the AI Impact Summit really understands that we can't do this alone; nobody in their silo can solve the problem themselves. We've got to get capability from each other, we've got to learn from each other, and I think the 300,000 people who have been here this week have certainly proven that to be the case. I think it's also key to actually doing safe and responsible AI. It's not just the technical controls or the networks that we have.

It's having the people in the room who may not care about AI, but who do care about the services being delivered, about their voice being heard, and about the environment around them as well. So it keeps bringing you back to reframing: what's the problem we're trying to solve? What's the mission we're trying to achieve? And I think if we want to talk about impact, that's the key question.

Mridu Bhandari

Right. All right. Well, let's also look at the financial angle with Divyesh. We've talked about open finance and very effective financial ecosystems. What is it really going to take to scale AI to that level, especially in the near term, to enable responsible deployment and sustainable finance, for example for farmers, particularly in the Indian context, given the complexities we see in this country?

Divyesh Vithlani

So I think it's going to be a force for good. If I look at banking, I don't think the core of banking is going to change; however, how we bank, how we drive that experience for our customers, is, I think, going to be transformationally different in the future. Just one example, to pick up on your question: if you combine the technology of AI with, say, digital assets and stablecoins, you get the ability to move money as fast as email. Why is it that it takes three or four days today to clear a cross-border payment? That goes completely against the whole concept of open finance and inclusion. So I think AI, together with some of these other technologies, is going to be a game changer in enabling things like that and in driving an experience that is much more natural and intuitive than it is today.

Personally, as a CTO, there are a lot of questions about whether jobs are going to go away, et cetera. In any organization, certainly the banks I've worked in, the CapEx demand on an annual basis typically outstrips supply at a ratio of five to one. But if AI can help us change those legacy systems and modernize our platforms, because let's be honest, 90% of banks still operate on legacy technologies and very few are greenfield, all of those technologies need to be modernized and upgraded, and I think AI, again, is going to be a force for good there. And once we modernize those systems, they will lend themselves to connecting more seamlessly through microservices, APIs, MCPs, et cetera, without getting into the technical details.

So I think that AI, together with some of these other technologies, digital assets and the like, will drive a very different paradigm.

Mridu Bhandari

Lovely. Very exciting times ahead. Well, Hari, if you were to give a CEO a three-step plan today to really scale responsibly, what would that be? Three things.

Hari Shetty

Okay. Number one, be very clear about what you want to achieve with AI. Have the vision right; have clear objectives in terms of what you want to achieve. That's the first part. The second thing I would call out: don't just think about task automation. Think about what AI does to your business. It's fundamentally an operating-model shift, and that is what can actually deliver value. So think big: think about the operating-model shift, which will require structural changes, changes to ways of working, skill changes. It's a complete transformation compared to just automation. And the third thing: please call Wipro.

Mridu Bhandari

All right. Let's now imagine that we are at the India AI Impact Summit 2030, just about four years ahead. What has changed in the way we live, work, and play since the last time you were here, which is today? Paul, do you want to start? And feel free to use your imagination.

Paul Hubbard

Yeah, okay. Look, as an economist, it's very hard to predict the future. I think what will have changed is that there's a whole bunch of people turning up with job titles we've never even heard of before, and they're telling us about things that people in a bureaucracy or the government only dream about. So I think we'll see a lot more diversity in what people do.

Mridu Bhandari

Right, lots of new jobs. And yes, most industry reports suggest that many of the jobs of the next decade have not been invented yet. Absolutely.

Divyesh Vithlani

Well, in four years' time we may not be here in person; it will be our agents or avatars being teleported in, because the technology, through Ericsson's amazing network, has the bandwidth, the latency has improved vastly, and obviously Wipro's technology is creating these avatars and agents. But to be serious, I think what will have changed, at least from my perspective, is that banking will be a lot more seamless. It will really be about putting the customers first rather than imposing the friction we see today in how financial services work. For instance, we will shop much more intuitively. We won't even know that we need a new fridge or a new car.

It will kind of just occur to us naturally, and something will appear on your doorstep that you didn't even know you needed, but once it arrives, you think: wow, that's exactly what I needed. The payment's taken care of. All the servicing is taken care of. So I think that is a near-term reality.

Mridu Bhandari

All right. Erik, Hari, go ahead.

Hari Shetty

A couple of things. One, I'll definitely break my glasses and use Erik's glasses. But more importantly, what I think will fundamentally change is decision velocity. The decision velocity in organizations will completely change in the next four years. One of the key things we always hear in any enterprise is that the organization is so slow: the processes take a lot of time, things don't happen at the pace we all want them to, and a slow process is not a great experience. The fundamental problem that AI will solve, and I'm pretty sure it will solve it in the next couple of years, is that the velocity of everything will increase so tremendously that we'll look back and say: how did we ever tolerate something as slow as what we have today?

Erik Ekudden

Yeah, I wonder if it's doable in four years on a global scale. But I hope that four years from now we have dissemination, we have diffusion, we have everyone included in this fantastic journey that AI really, really is. I think it hinges on the dialogue we're having here, and it's conditional on solving the trust issues. We talk about security and privacy as things we can solve technically and so forth, but they need fundamental anchoring in how humans behave, so that you can really trust these agents, as was mentioned before, and so that we put the right constraints on them.

If that happens, then of course four years from now it's going to be so seamless, with our digital colleagues, AI colleagues, physical-AI colleagues and so forth, that it will be a completely different way of looking at work and, of course, at how you get help. You're going to be an agent of something much, much bigger than what you're commanding today. I think it's an enormous shift.

Mridu Bhandari

Absolutely. Fascinating times ahead. Thank you, gentlemen, for your incredible insights; that was very educational and informational for all of us. The takeaway for me from this conversation is clear: if people, planet, and progress remain our guiding sutras, and if we can align all the seven pillars of global cooperation, AI is not just going to optimize businesses. It is going to redefine competitiveness, it is going to rebuild public trust, and hopefully it will future-proof all our institutions for the decades ahead. Thank you very much; I appreciate you all taking the time here. And thank you all for being a wonderful audience.


Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Mr Bhandari framed AI sustainability using three sutras of People, Planet and Progress”

The knowledge base explicitly lists the three sutras as people, progress and planet, confirming the framing described in the report [S106] and [S108].

Additional Context (medium)

“Mr Bhandari introduced seven “chakras” – human capital, inclusion, trust, resilience, science, resources, and social good – to guide global cooperation”

While the sutras are confirmed, the knowledge base does not mention the seven chakras; it only references the broader three-sutra framework, providing context but not confirming the specific chakras [S106].

Additional Context (high)

“Eric Ekudin described telecom networks evolving from passive data carriers to an “intelligent fabric” that will host AI inference workloads at the edge”

The transcript of the AI Impact Summit notes that telecom networks have evolved significantly from merely enabling connectivity to more advanced roles, aligning with the description of an “intelligent fabric” for edge AI workloads [S32].

Additional Context (medium)

“5G/6G must be secure, trusted and scalable to support industrial AI in agriculture, health‑care and smart manufacturing, and the network already provides guarantees for billions of devices”

The knowledge base discusses the upcoming 6G ecosystem where devices will have AI capabilities and emphasizes the need for secure, scalable networks for widespread AI deployment, adding nuance to the claim about 5G/6G requirements [S118].

External Sources (119)
S1
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Mridu Bhandari- Moderator from Network18 This comprehensive discussion at the AI Impact Summit brought together leader…
S2
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S3
WSIS+20 Open Consultation session with Co-Facilitators — – **Jennifer Chung** – (Role/affiliation not clearly specified)
S4
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Hari Shetty- Strategist and Technology Officer at Wipro
S5
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Divyesh Vithlani- Group Chief Technology and Transformation Officer, First Abu Dhabi Bank
S6
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — All right, Divyesh? Starting from my left, Paul Hrubag, first assistant secretary for AI delivery and enablement at the…
S7
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — – Paul Hubbard- Divyesh Vithlani – Paul Hubbard- Erik Ekudden- Divyesh Vithlani- Hari Shetty – Paul Hubbard- Hari Shet…
S8
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Erik Ekudden- Chief Technology Officer of Ericsson
S9
Keynote by Marcus Wallenberg Chairman SEB & Saab — – Mr. Ek Udden: Chief Technical Officer of Ericsson (mentioned by Marcus Wallenberg as being present, but did not speak …
S10
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S11
UNSC meeting: Peace and common development — The speaker emphasises the importance of a comprehensive approach to achieving sustainable peace and security, rooted in…
S12
Agenda item 5 : Day 4 Morning session — Collaborative implementation of joint projects builds confidence
S13
Session — Building trust in electoral processes Eliud argues that building confidence in elections requires improvements in overa…
S14
DRAFT AUGUST, 2024 — What makes AI a compelling force for advancement and change is that the technology has the potential to make an impact f…
S15
Conversation: 01 — Thank you very much. And I must say that it’s very impressive to see India convene the world on such an important subjec…
S16
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Other countries have recognized this and have implemented national-level initiatives. UNDP is actively involved in suppo…
S17
National Strategy for Artificial Intelligence — Decisions made by systems built on artificial intelligence must be traceable, explainable and transparent. This means th…
S18
National Strategy for Artificial Intelligence — Citizens and businesses must have confidence in artificial intelligence whenever it is used by the public authorities, s…
S19
STRATEGIE NATIONALE DE L’INTELLIGENCE ARTIFICIELLE — La stratégie nationale d’intelligence artificielle (IA) de la Côte d’Ivoire repose sur dix principes fondamentaux qui …
S20
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S21
Agentic AI in Focus Opportunities Risks and Governance — That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry-driven, consensus-based stan…
S22
Shaping the Future AI Strategies for Jobs and Economic Development — Nations defined by geographical dispersal of small islands, 1 ,200 islands, narrow economy base, and acute exposure to c…
S23
Keynote-Brad Smith — “We need to look at AI as the next great generator for human curiosity.”[11]. “Human capability is neither fixed nor fin…
S24
https://dig.watch/event/india-ai-impact-summit-2026/global-enterprises-show-how-to-scale-responsible-ai — Absolutely. And I told them that you were starting your journey on the Gen AI. Can we work with you on responsible AI? …
S25
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — In areas like textiles, pharmaceuticals, etc. The question now is, how do we reliably move from ideas to impact and be m…
S26
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Absolutely. Last year I talked about 400 use cases that we came up with in Saudi Aramco. This year we’re talking about 5…
S27
AI for agriculture Scaling Intelligence for food and climate resilience — “We are moving beyond pilots to projects at full scale.”[47]. “We will move from pilots to platforms, from fragmented da…
S28
The Intelligent Coworker: AI’s Evolution in the Workplace — Christoph Schweizer advocated for new measurement approaches, emphasising “adoption and usage,” “employee satisfaction s…
S29
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S30
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S31
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — This comment is revolutionary because it redefines what a telecommunications network fundamentally is. Rather than viewi…
S32
Building India's Digital and Industrial Future with AI — “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure.”[1]. …
S33
Secure Finance Risk-Based AI Policy for the Banking Sector — Embedded governance is not a regulatory burden. It is a strategic imperative. It ensures that innovation is sustainable, trust…
S34
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S35
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S36
Responsible AI in India Leadership Ethics & Global Impact — “So first, as any part of the ecosystem, we need to be aware that this is our responsibility and we are accountable for …
S37
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — This reframing fundamentally altered the discussion’s direction, moving away from technical solutions toward structural …
S38
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Development | Capacity development | Infrastructure
S39
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S40
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S41
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Deshpande’s framework emphasises three critical elements: providing scalable playgrounds for business units to operate w…
S42
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S43
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S44
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S45
AI for agriculture Scaling Intelligence for food and climate resilience — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S46
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Hari Shetty, Strategist and Technology Officer at Wipro, addressed the persistent challenge of moving from pilot project…
S47
AI Meets Agriculture Building Food Security and Climate Resilience — “And under the visionary leadership of our Honorable Prime Minister Narendra Modi, India has placed digital public infra…
S48
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S49
GOVERNING AI FOR HUMANITY — Discussions about AI often resolve into extremes. In our consultations around the world, we engaged with those who…
S50
Advancing Scientific AI with Safety Ethics and Responsibility — Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S51
Interim Report: — 52. Any AI governance effort should prioritize universal buy-in by different member states and stakeholders. This is in …
S52
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S53
WS #123 Responsible AI in Security Governance Risks and Innovation — She emphasizes that UN-sponsored platforms like UNIDIR’s RAISE and IGF play a critical role in enabling multi-stakeholde…
S54
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — First, trust. It’s trust. Trustability. Trustability because we need to trace the systems, the models, the data that we …
S55
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste, Your Excellencies. Thank you so much for organizing this great event. It’s a great honor for Austria to be here…
S56
Comprehensive Report: European Approaches to AI Regulation and Governance — And that goes along with this state intervention only whenever necessary. And the goals of the regulation should be, we …
S57
Building Trustworthy AI Foundations and Practical Pathways — Debayan proposes defining risk as the product of the likelihood of an undesirable outcome and its severity. He stresses …
S58
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S59
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — i suspect that it’s become quite realistic the risk assessment among enterprises not to overestimate it they’re manageab…
S60
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 part 2 — Kazakhstan: Thank you, Chair. As we advance in our discussions, it is evident that while significant progress has been …
S61
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Another significant aspect highlighted is the role of multi-stakeholder engagement in the Internet Governance Forum (IGF…
S62
Cyber Resilience Playbook for PublicPrivate Collaboration — – Some capabilities have the profile of a pure public good (in the classic economics sense): their consumption is non-r…
S63
Summary — The Principality of Liechtenstein is supporting, developing and shaping digitalisation for the benefit of the population…
S64
Prediction Machines in International Organisations: A 3-Pathway Transition — Have you ever pondered whether it is appropriate to ask ChatGPT to write the first paragraph of a press release or rephr…
S65
UNSC meeting: Scientific developments, peace and security — The integration of artificial intelligence and neurotechnologies will enable ultra-fast decision-making
S66
Networking Session #26 Transforming Diplomacy for a Shared Tomorrow — Sebastian contends that AI’s ability to process vast amounts of historical and current data provides diplomats with pred…
S67
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “We also, I’m glad to announce establishing a specialized economic zone dedicated to digital technology and AI designed …
S68
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Trust is the foundation that enables innovation rather than hindering it, requiring a people-first approach that meets c…
S69
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S70
AI Governance Dialogue: Presidential address — H.E. Mr. Alar Karis: Honourable leaders, excellencies, distinguished delegates. It is truly an honour to represent Eston…
S71
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S72
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — This comment is revolutionary because it redefines what a telecommunications network fundamentally is. Rather than viewi…
S73
Building India's Digital and Industrial Future with AI — “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure.”[1]. …
S74
Trusted Connections_ Ethical AI in Telecom & 6G Networks — And not only that, but truly well performing networks. That is a fundamental platform to drive innovation on and to driv…
S75
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S76
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — So the data really doesn’t go out of the bank themselves. But there is a central aggregation service that we are running…
S77
Agentic AI in Focus Opportunities Risks and Governance — Caroline Louveaux outlined MasterCard’s four-pillar approach to agentic commerce guardrails. First, “know your agent” re…
S78
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S79
Responsible AI in India Leadership Ethics & Global Impact — And let me say how it’s translated into our products. And by the way, it’s in our products. It’s in our methodologies. E…
S80
Open Forum #30 High Level Review of AI Governance Including the Discussion — Melinda Claybaugh: Thank you so much for the question, and thank you for the opportunity to be here. As you were giving …
S81
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — A particularly encouraging theme throughout the discussion was the natural alignment of commercial incentives with susta…
S83
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Collaboration across sectors, robust governance, and strategic investments will be critical in achieving a sustainable a…
S84
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S85
WS #148 Making the Internet greener and more sustainable — Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resou…
S86
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — The discussion began with a professional, diplomatic tone as panelists introduced themselves and outlined the compact’s …
S87
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S88
Building the Next Wave of AI_ Responsible Frameworks & Standards — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with an authorita…
S89
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S90
Central Bank Tools and Independence: A Comprehensive Panel Discussion — The tone began as analytical and professional, with central bankers carefully explaining their institutional perspective…
S91
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Major Discussion Points: The discussion maintained a professional, collaborative tone throughout, characterized by c…
S92
WS #187 Bridging Internet AI Governance From Theory to Practice — Risk-based approaches: Multiple speakers supported prioritizing governance based on risk levels and application co…
S93
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S94
AI Meets Cybersecurity Trust Governance &amp; Global Security — These key comments fundamentally shaped the discussion by challenging conventional assumptions about AI security and gov…
S95
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S96
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S97
Discussion Report: AI-Native Business Transformation at Davos — The discussion maintains an optimistic and forward-looking tone throughout, with participants sharing insights as indust…
S98
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S99
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S100
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion maintained a predominantly optimistic and forward-looking tone throughout, despite acknowledging signific…
S101
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S102
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S103
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S104
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S105
World Economic Forum Annual Meeting Closing Remarks: Summary — These key comments transformed what could have been a standard ceremonial closing into a meaningful reflection on the ph…
S106
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S107
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Key Commitments and Next Steps; Opening Context and Audience Engagement. A crucial dimension addressed energy cons…
S108
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Thomas Schneider delivered a keynote address at the AI Impact Summit in Delhi, announcing Switzerland’s role as host of …
S109
AI for social good: the new face of technosolutionism — Birhane concluded her presentation by acknowledging that being allowed to “take centre stage here and to speak about thi…
S110
Closing remarks — Secretary-General Martin offered insight into trust in AI systems, stating: “Trust isn’t a property of machines. It’s ho…
S111
UK names industry leaders to steer safe AI adoption in finance — The UK government has appointed two senior industry figures as AI Champions to support safe and effective adoption of AI a…
S112
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S113
Keynote-Sam Altman — First, that widespread access to AI is the only fair and safe path forward. He argued that democratizing AI capabilities…
S114
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Speakers: Brian Armstrong, Brad Garlinghouse, François Villeroy de Galhau. Regulation and innovation must wor…
S115
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S116
Hard power of AI — Furthermore, the analysis addresses the proliferation of fake media, particularly through the use of deepfakes in crypto…
S117
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — AI is transforming various aspects of the media and publishing industry, including content creation, workflow improvemen…
S118
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S119
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Strong aggregate demand and tight business-education partnerships are essential for successful transitions
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Paul Hubbard
6 arguments | 171 words per minute | 1045 words | 365 seconds
Argument 1
Trust as foundation for innovation
EXPLANATION
Paul argues that trust is not an obstacle to AI innovation but rather the essential foundation that enables it. Without public trust, innovative AI solutions cannot be effectively deployed.
EVIDENCE
He states that AI should not be framed as a trade-off between trust and innovation; instead, trust provides the base that makes innovation possible, emphasizing that trust is the foundation for any AI advancement [39-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is highlighted as essential for banking and AI deployment, and as a prerequisite for public confidence and inclusion in AI systems [S1][S10][S18][S20].
MAJOR DISCUSSION POINT
Trust as foundation for innovation
AGREED WITH
Erik Ekudden, Hari Shetty, Divyesh Vithlani
Argument 2
People‑first, democratic participation builds confidence
EXPLANATION
Paul stresses that AI adoption must be grounded in a people‑first approach, engaging citizens where they are and respecting their familiarity with technology. Democratic participation helps build confidence and acceptance of AI systems.
EVIDENCE
He explains that governments need to meet citizens where they are, understand their comfort levels, and build AI solutions from that foundation rather than imposing new technologies, highlighting a people-first, participatory approach [44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaborative implementation and inclusive participation are identified as ways to build confidence in technology projects [S12][S20].
MAJOR DISCUSSION POINT
People‑first, democratic participation builds confidence
Argument 3
AI‑ready infrastructure distinguishes AI‑native nations
EXPLANATION
Paul notes that while data‑centers and compute capacity will be built everywhere, the true differentiator for AI‑native nations will be their ability to adapt, be competent, and stay curious. These capabilities enable rapid deployment of AI models and services.
EVIDENCE
He adds that beyond physical infrastructure, the key factors are capability, competence, and curiosity, which allow governments and economies to flexibly adopt new AI approaches and create new jobs, citing examples from his visit to India [300-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is presented as a tool to bridge infrastructure gaps in developing economies and as a driver of institutional resilience and curiosity-driven capability [S14][S15][S22][S23].
MAJOR DISCUSSION POINT
AI‑ready infrastructure distinguishes AI‑native nations
Argument 4
National AI strategy must be transparent, spread benefits, and protect citizens
EXPLANATION
Paul outlines that a responsible national AI strategy should clearly communicate its goals, ensure AI benefits reach all segments of society, and safeguard citizens from potential harms. Transparency and inclusive benefit distribution are essential for public trust.
EVIDENCE
He describes the need for a clear plan that communicates AI opportunities, spreads benefits to rural and marginalized groups, and keeps citizens safe through AI safety and harm mitigation conversations across government and business levels [163-171].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
National AI strategies call for traceability, transparency, and citizen confidence, emphasizing inclusive benefit distribution [S17][S18][S20].
MAJOR DISCUSSION POINT
Transparent, inclusive national AI strategy
Argument 5
Governments shift from cautious to active posture, managing risk with guardrails
EXPLANATION
Paul observes that governments, traditionally cautious, are now adopting a more active stance toward AI, embracing risk while establishing guardrails to manage it. This shift enables faster AI adoption while maintaining safety.
EVIDENCE
He notes that the Australian government has moved from a cautious approach to a more active posture, embracing risk and implementing guardrails that allow risk to be understood and managed effectively [351-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voluntary, industry-driven standards and guardrails are recommended to balance risk without stifling innovation, while over-regulation is warned against [S21][S24][S20].
MAJOR DISCUSSION POINT
Government shift to active AI risk management
AGREED WITH
Erik Ekudden, Hari Shetty
DISAGREED WITH
Erik Ekudden
Argument 6
AI‑native nations will be defined by capability, competence, and curiosity
EXPLANATION
Paul reiterates that the defining traits of AI‑native nations will be their internal capabilities, technical competence, and a culture of curiosity that drives continuous learning and adaptation. These traits outweigh pure infrastructure investment.
EVIDENCE
He emphasizes that while data-centers and compute will be built, the decisive factors are capability, competence, and curiosity, which enable governments and economies to flexibly adopt AI and create new jobs, citing observations from his visit to India [300-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capability and curiosity are cited as key differentiators for AI-native economies, linked to broader capacity-building goals [S23][S22][S14].
MAJOR DISCUSSION POINT
Capability, competence, curiosity as AI‑native nation traits
Hari Shetty
6 arguments | 199 words per minute | 1619 words | 487 seconds
Argument 1
Consistent, hallucination‑free performance earns trust
EXPLANATION
Hari argues that trust in AI systems is earned only when they consistently deliver accurate results without hallucinations. Long‑term reliable performance builds both human and agentic trust.
EVIDENCE
He explains that trust must be earned over time, requiring AI to operate without hallucinations or fundamental flaws, and that only consistent, reliable performance can establish lasting trust [147-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Metrics such as “plus scores” and quality-tracking frameworks are proposed to monitor hallucinations and ensure reliable outputs [S28].
MAJOR DISCUSSION POINT
Reliability and hallucination‑free operation builds trust
AGREED WITH
Paul Hubbard, Erik Ekudden, Divyesh Vithlani
Argument 2
Problem‑first, continuous‑operation model drives proof over promise
EXPLANATION
Hari stresses that AI projects should start by defining the business problem rather than selecting a model, and solutions must operate continuously to demonstrate real value. This problem‑first, always‑on approach turns promises into proven outcomes.
EVIDENCE
He outlines a four-point approach: start with the problem, adapt solutions to enterprise complexity, ensure solutions work every day, and embed trust through consistent performance, thereby moving from pilots to proof [147-152].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A problem-first, always-on approach is advocated to move from pilots to production, with emphasis on collaboration and scaling use cases [S26][S27][S1].
MAJOR DISCUSSION POINT
Problem‑first, always‑on AI delivery
Argument 3
Enterprise AI must move beyond pilots to reliable, always‑on services
EXPLANATION
Hari contends that enterprises can no longer rely on pilot projects; AI must be deployed as a reliable, continuously operating service to generate real business value. Consistency and uptime are essential for enterprise trust.
EVIDENCE
He states that AI is no longer about pilots, emphasizing the need for solutions that work every hour, every day, and that only such reliable services can be taken to market [147-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from experimental pilots to continuously operating platforms is highlighted as essential for enterprise value [S27][S26].
MAJOR DISCUSSION POINT
Enterprise AI needs reliable, always‑on services
AGREED WITH
Divyesh Vithlani, Paul Hubbard
Argument 4
Treat AI as a core capability; productivity is an early indicator, not the sole metric
EXPLANATION
Hari suggests that AI should be viewed as a foundational capability rather than a project measured solely by ROI. Productivity gains are an early signal, but longer‑term outcomes such as cost reduction, quality improvement, and cycle‑time reduction are more meaningful.
EVIDENCE
He explains that while productivity is an early indicator, the ultimate benefits include lower costs, higher quality, and faster cycles, and that Wipro’s models help clients understand these end outcomes beyond simple productivity metrics [237-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI is framed as a foundational capability that drives broader outcomes beyond immediate productivity gains, aligning with human-capability and capacity themes [S23][S22].
MAJOR DISCUSSION POINT
AI as core capability, productivity as early signal
Argument 5
“Plus scores” track failures, hallucinations, and quality of outcomes
EXPLANATION
Hari introduces “plus scores” as a metric to monitor AI performance, capturing failure rates, hallucinations, and alignment with acceptable quality thresholds. This helps ensure AI outputs meet organizational standards.
EVIDENCE
He describes plus scores as tracking the number of failure instances, assessing whether they fall within acceptable vectors, and using them to evaluate quality, hallucinations, and overall task success [247-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
New measurement approaches, including “plus scores,” are suggested to capture failure rates, hallucinations, and quality thresholds [S28].
MAJOR DISCUSSION POINT
Plus scores for AI quality monitoring
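The panel did not spell out a formula for "plus scores," so the following is a minimal illustrative sketch, under our own assumptions: the score is taken as the share of task runs that completed without failure or hallucination, checked against an acceptable threshold. All names (`TaskOutcome`, `plus_score`, `within_threshold`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TaskOutcome:
    """One AI task run, flagged by a reviewer or automated checker."""
    failed: bool        # the task did not complete successfully
    hallucinated: bool  # the output contained fabricated content

def plus_score(outcomes: list[TaskOutcome]) -> float:
    """Hypothetical 'plus score': share of runs that were both
    successful and hallucination-free (1.0 = perfect)."""
    if not outcomes:
        return 1.0
    clean = sum(1 for o in outcomes if not o.failed and not o.hallucinated)
    return clean / len(outcomes)

def within_threshold(outcomes: list[TaskOutcome], minimum: float = 0.95) -> bool:
    """Check whether quality stays within an acceptable threshold."""
    return plus_score(outcomes) >= minimum

runs = [TaskOutcome(False, False), TaskOutcome(False, True),
        TaskOutcome(False, False), TaskOutcome(True, False)]
print(plus_score(runs))        # 0.5
print(within_threshold(runs))  # False
```

A real deployment would track such a score per task type over time, which is what lets it surface degrading quality rather than one-off failures.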
Argument 6
Decision velocity will dramatically increase, reshaping organizational processes
EXPLANATION
Hari predicts that AI will accelerate decision‑making speed across organizations, eliminating slow, cumbersome processes. Faster decision velocity will become a competitive advantage.
EVIDENCE
He notes that AI will dramatically increase decision velocity, transforming slow organizational processes into rapid, efficient operations, and that this shift will be evident within the next few years [447-452].
MAJOR DISCUSSION POINT
AI‑driven acceleration of decision velocity
Divyesh Vithlani
7 arguments | 145 words per minute | 2320 words | 958 seconds
Argument 1
Platform‑first approach with layered ethical data and model governance
EXPLANATION
Divyesh explains that a platform‑first strategy, built with layers for data, models, knowledge, and context, embeds ethical AI, data governance, and fair use directly into the platform. This enables safe, scalable AI deployment across the enterprise.
EVIDENCE
He describes constructing a platform that integrates layers from data to models, embedding ethical AI and data-governance controls, allowing end-users to leverage AI as intuitively as opening an Excel file [130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A platform-first strategy with built-in ethical layers and traceability is promoted to embed responsible AI governance [S1][S10][S21].
MAJOR DISCUSSION POINT
Platform‑first with ethical governance layers
AGREED WITH
Erik Ekudden, Paul Hubbard, Hari Shetty
DISAGREED WITH
Erik Ekudden
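As a rough sketch only, not Divyesh's actual design, the layered platform idea can be expressed as a structure where each layer (data, models, knowledge, context) carries its own embedded governance controls; the layer and control names below are illustrative assumptions.

```python
# Hypothetical layered, platform-first AI stack: governance is embedded
# at every layer rather than bolted on afterwards.
PLATFORM_LAYERS = [
    {"layer": "data",      "governance": ["lineage tracking", "fair-use policy"]},
    {"layer": "models",    "governance": ["bias evaluation", "version control"]},
    {"layer": "knowledge", "governance": ["source attribution"]},
    {"layer": "context",   "governance": ["PII redaction"]},
]

def governance_checks(layer_name: str) -> list[str]:
    """Return the governance controls embedded at a given layer."""
    for layer in PLATFORM_LAYERS:
        if layer["layer"] == layer_name:
            return layer["governance"]
    raise KeyError(f"unknown layer: {layer_name}")

print(governance_checks("data"))  # ['lineage tracking', 'fair-use policy']
```

The point of the layering is that an end-user request passes through every layer's controls implicitly, which is what makes AI usable "as intuitively as opening an Excel file" while staying governed.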
Argument 2
Execution plane vs. control plane enables dynamic agent oversight
EXPLANATION
Divyesh differentiates between an execution plane that runs AI agents and a control plane that monitors and governs them. This separation allows real‑time oversight, onboarding/offboarding, and conflict management between agents and humans.
EVIDENCE
He outlines the two-plane architecture, an execution plane for activity and a control plane for supervision, detailing how agents receive guardrails, are monitored, and can be managed similarly to human staff, including conflict detection in real time [206-213] and [221-223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Separating execution and control planes allows real-time monitoring and governance of AI agents, echoing industry-standard guardrail frameworks [S21][S24].
MAJOR DISCUSSION POINT
Two‑plane architecture for dynamic agent oversight
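A minimal sketch of the execution-plane/control-plane split, under our own assumptions rather than Divyesh's implementation: the execution plane runs agent actions, while the control plane onboards and offboards agents, authorizes each action against guardrails, and keeps an audit log. All class and method names are hypothetical.

```python
class ControlPlane:
    """Supervisory plane: admits agents, enforces guardrails, audits actions."""
    def __init__(self, allowed_actions: set[str]):
        self.allowed_actions = allowed_actions
        self.audit_log: list[tuple[str, str, bool]] = []
        self.active: set[str] = set()

    def onboard(self, agent_id: str) -> None:
        self.active.add(agent_id)

    def offboard(self, agent_id: str) -> None:
        self.active.discard(agent_id)

    def authorize(self, agent_id: str, action: str) -> bool:
        # Every decision, allowed or not, is recorded for accountability.
        ok = agent_id in self.active and action in self.allowed_actions
        self.audit_log.append((agent_id, action, ok))
        return ok

class ExecutionPlane:
    """Activity plane: runs agent actions only if the control plane approves."""
    def __init__(self, control: ControlPlane):
        self.control = control

    def run(self, agent_id: str, action: str) -> str:
        if not self.control.authorize(agent_id, action):
            return "blocked"
        return f"{agent_id} executed {action}"

control = ControlPlane(allowed_actions={"summarize", "draft_email"})
plane = ExecutionPlane(control)
control.onboard("agent-7")
print(plane.run("agent-7", "summarize"))       # agent-7 executed summarize
print(plane.run("agent-7", "transfer_funds"))  # blocked
control.offboard("agent-7")
print(plane.run("agent-7", "summarize"))       # blocked
```

Separating the two planes means oversight logic (guardrails, onboarding, auditing) can change without touching the agents themselves, mirroring how human staff are managed independently of the work they do.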
Argument 3
Platform architecture allows safe, enterprise‑scale AI deployment
EXPLANATION
Divyesh argues that a platform‑centric architecture, with built‑in safeguards and layered governance, enables enterprises to deploy AI at scale while maintaining trust and compliance. The platform abstracts complexity for end‑users.
EVIDENCE
He reiterates that the platform-first approach, with ethical layers and governance, unleashes AI power safely for business users, allowing AI to be used as naturally as any other task [130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Platform-centric designs with ethical layers are presented as a way to safely scale AI across enterprises [S1][S10].
MAJOR DISCUSSION POINT
Platform enables safe enterprise AI scale
Argument 4
Dynamic oversight through control planes ensures accountable agent actions
EXPLANATION
Divyesh emphasizes that the control plane provides continuous monitoring and accountability for AI agents, ensuring their actions align with organizational policies and can be audited. This dynamic oversight is essential for responsible AI.
EVIDENCE
He describes how the control plane monitors every agent activity, manages onboarding/offboarding, and resolves conflicts between agents and humans, thereby ensuring accountability and real-time oversight [206-213] and [221-223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Control-plane monitoring provides continuous accountability for AI agents, aligning with recommended guardrail practices [S21][S24].
MAJOR DISCUSSION POINT
Control‑plane based dynamic oversight
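The execution/control-plane split described above can be sketched in a few lines. This is an illustrative sketch only: the names (ControlPlane, ExecutionPlane, AgentAction) and the policy model are invented for the example, not drawn from the session; they show one way a control plane could authorize, audit, and onboard/offboard agents while a separate execution plane does the work.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    agent_id: str
    action: str

@dataclass
class ControlPlane:
    """Supervises agents: policy checks, audit logging, onboarding state."""
    allowed_actions: set = field(default_factory=set)
    active_agents: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def onboard(self, agent_id: str) -> None:
        self.active_agents.add(agent_id)

    def offboard(self, agent_id: str) -> None:
        self.active_agents.discard(agent_id)

    def authorize(self, act: AgentAction) -> bool:
        # Every decision is recorded, giving the continuous audit trail
        # the panel describes as essential for accountability.
        ok = act.agent_id in self.active_agents and act.action in self.allowed_actions
        self.audit_log.append((act.agent_id, act.action, "allowed" if ok else "blocked"))
        return ok

class ExecutionPlane:
    """Performs the work, but only after the control plane signs off."""
    def __init__(self, control: ControlPlane):
        self.control = control

    def run(self, act: AgentAction) -> str:
        if not self.control.authorize(act):
            return "blocked"
        return f"executed {act.action}"  # real agent work would happen here

control = ControlPlane(allowed_actions={"summarize"})
control.onboard("agent-1")
exec_plane = ExecutionPlane(control)
print(exec_plane.run(AgentAction("agent-1", "summarize")))   # executed summarize
print(exec_plane.run(AgentAction("agent-1", "wire_funds")))  # blocked
```

The design choice mirrors the argument: the execution plane never decides policy itself, so oversight (including offboarding a misbehaving agent) takes effect immediately and every action leaves an audit record.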
Argument 5
Banking ROI measured via micro‑productivity, faster response, and legacy modernization
EXPLANATION
Divyesh outlines that AI delivers ROI in banking through micro‑productivity gains, accelerated response times, and the modernization of legacy systems. These improvements translate into cost savings and competitive advantage.
EVIDENCE
He cites examples where AI improves micro-productivity, enables faster reactions to change, and modernizes the legacy platforms still running in 90% of banks, thereby creating tangible value and faster experimentation [334-340] and [413-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Banking relies on trust and AI to improve productivity, accelerate response times, and modernize legacy systems, supporting economic development goals [S1][S14][S22].
MAJOR DISCUSSION POINT
Banking ROI through productivity and legacy modernization
Argument 6
Platform‑centric guardrails mitigate AI risks in enterprise deployments
EXPLANATION
Divyesh asserts that embedding guardrails within a platform‑centric design reduces AI‑related risks, ensuring compliance, security, and ethical operation across the enterprise. Proper tooling and governance are key to risk mitigation.
EVIDENCE
He explains that by maintaining platform-centric guardrails and leveraging existing data-center and cloud controls, the organization can meet and mitigate AI risks, ensuring benefits outweigh concerns [355-362].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding guardrails within a platform architecture is recommended to manage AI risk and ensure compliance [S21][S24].
MAJOR DISCUSSION POINT
Platform guardrails for AI risk mitigation
Argument 7
Banking will become seamless with AI avatars and instant cross‑border payments
EXPLANATION
Divyesh envisions a future where AI avatars and digital assets enable frictionless banking experiences, including near‑instant cross‑border transactions, transforming customer interactions.
EVIDENCE
He provides an example of combining AI with digital assets and stablecoins to reduce cross-border payment times from days to near-instant, illustrating how AI will reshape banking experiences [408-410] and further describes seamless, intuitive services enabled by AI avatars [437-445].
MAJOR DISCUSSION POINT
Seamless AI‑driven banking experiences
Erik Ekudden
9 arguments, 184 words per minute, 2305 words, 750 seconds
Argument 1
Secure, trusted network as backbone of AI trust
EXPLANATION
Erik highlights that the security and trustworthiness of telecom networks are fundamental to building overall AI trust. A secure network provides the guarantees needed for AI workloads.
EVIDENCE
He notes that networks are already secure and trusted, providing the guarantees required for AI inference and that trust and security are core principles for network evolution [80-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Secure, trusted telecom networks are identified as foundational for AI trust, with standards and inclusion-based trust building cited [S21][S20].
MAJOR DISCUSSION POINT
Network security as AI trust foundation
AGREED WITH
Paul Hubbard, Hari Shetty, Divyesh Vithlani
Argument 2
Networks evolving from passive carriers to active AI enablers
EXPLANATION
Erik describes the transition of telecom networks from merely transporting data to actively hosting AI inference workloads, becoming an intelligent fabric that supports distributed AI services.
EVIDENCE
He explains that the network is becoming the host for AI experiences, requiring scaling to handle inference workloads and marking a shift to an intelligent fabric [75-78] and earlier discussion of the network as a host for AI [58-60].
MAJOR DISCUSSION POINT
Network evolution to active AI fabric
DISAGREED WITH
Divyesh Vithlani
Argument 3
AI glasses demand low‑latency, reliable 5G/6G fabric
EXPLANATION
Erik points out that AI‑powered wearables like smart glasses require ultra‑low latency and high‑reliability connectivity, which only advanced 5G/6G networks can provide. This drives the need for a robust intelligent fabric.
EVIDENCE
He describes AI glasses that offload inference to the network, requiring reliable, low-latency connectivity, and stresses that the network must improve beyond current 5G to meet these demands [61-66] and [90-94].
MAJOR DISCUSSION POINT
AI wearables need high‑performance network fabric
Argument 4
Energy‑efficient hardware and software mitigate AI’s power use
EXPLANATION
Erik argues that to keep AI sustainable, both hardware and software must be designed for energy efficiency, including smaller models where possible and smarter hardware, reducing overall power consumption.
EVIDENCE
He outlines the need for energy-efficient hardware, software, and AI models, noting that moving inference to the edge and using smaller models can prevent a surge in energy use, and cites that networks consume about 1% of total power while enabling a 10-15% emissions reduction elsewhere [318-326].
MAJOR DISCUSSION POINT
Energy‑efficient AI hardware/software
Argument 5
Guardrails from telecom translate to AI agents, ensuring accountability
EXPLANATION
Erik suggests that the existing safety and security guardrails in telecom can be adapted to AI agents, providing a familiar accountability framework for AI services.
EVIDENCE
He states that telecom already has safety and security guardrails, and these can be translated one-to-one into the agentic AI world to ensure accountability for AI agents [190-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Existing telecom safety and security guardrails can be adapted to AI agents to provide accountability frameworks [S21][S24].
MAJOR DISCUSSION POINT
Telecom guardrails applied to AI agents
AGREED WITH
Divyesh Vithlani, Paul Hubbard, Hari Shetty
Argument 6
Governance should be domain‑specific, avoiding premature over‑regulation
EXPLANATION
Erik warns against imposing blanket regulations on AI before innovation has matured, advocating for domain‑specific governance that mirrors existing telecom safeguards without stifling progress.
EVIDENCE
He cautions that regulating before innovation can hinder progress and recommends translating telecom guardrails to AI on a domain-by-domain basis, avoiding premature over-regulation [191-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Domain-specific, voluntary standards are advocated to prevent stifling innovation, warning against blanket regulation [S21][S24].
MAJOR DISCUSSION POINT
Domain‑specific AI governance
Argument 7
Intelligent fabric unlocks new business models and growth opportunities
EXPLANATION
Erik claims that an AI‑enabled intelligent network creates novel business models, allowing operators to offer tailored, mission‑critical services and generate significant revenue and cost‑savings.
EVIDENCE
He describes how AI on the network enables new outcomes, drives business growth, and can deliver 10-50% efficiency gains, translating into billions of dollars of savings and new revenue streams [262-274], and further elaborates on modeling on top of the network to produce new outcomes [285-287].
MAJOR DISCUSSION POINT
Network AI drives new business models
Argument 8
AI‑native networks will provide real‑time, massive user experiences and support physical AI
EXPLANATION
Erik envisions AI‑native networks that can deliver real‑time, large‑scale user experiences and support emerging physical AI technologies such as robots and drones, requiring highly responsive and adaptable infrastructure.
EVIDENCE
He explains that AI-native networks must be responsive to fast changes, provide massive user experiences, and support physical AI like humanoids, drones, and other devices, emphasizing real-time adaptability [371-382].
MAJOR DISCUSSION POINT
AI‑native networks for real‑time massive experiences
Argument 9
Public sector may overestimate risk, potentially stalling innovation
EXPLANATION
Erik observes that governments sometimes over‑estimate AI risks, leading to excessive caution that can impede innovation, especially in public‑sector deployments.
EVIDENCE
He notes that the public sector may be overly cautious, over-estimating risk, which could hold back innovation, and contrasts this with the need for balanced risk management [346-348] and earlier remarks on premature regulation [191-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Over-cautious risk perception in the public sector is highlighted as a barrier to AI innovation, with calls for balanced risk management [S24][S20].
MAJOR DISCUSSION POINT
Over‑cautious risk perception in public sector
AGREED WITH
Paul Hubbard, Hari Shetty
DISAGREED WITH
Paul Hubbard
Mridu Bhandari
3 arguments, 133 words per minute, 1768 words, 795 seconds
Argument 1
Trust framed as one of the seven chakras for sustainable AI
EXPLANATION
Mridu positions trust as one of the seven foundational ‘chakras’—human capital, inclusion, trust, resilience, science, resources, and social good—that guide a sustainable AI future. Embedding trust at this pillar level ensures accountability and long‑term success.
EVIDENCE
She introduces the seven chakras of aligned global cooperation, explicitly listing trust among them as a concrete pillar for turning ambition into accountability [4] and frames the discussion around People, Planet, and Progress [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust is positioned as a core pillar for sustainable AI, reinforced by inclusion-based trust building and traceability concepts [S20][S10][S1].
MAJOR DISCUSSION POINT
Trust as a chakra for sustainable AI
Argument 2
Accountability embedded in the seven‑pillar framework for global cooperation
EXPLANATION
Mridu emphasizes that accountability for AI outcomes is woven into the seven‑pillar (chakras) framework, ensuring that each pillar—such as trust and social good—carries clear responsibility across societies and enterprises.
EVIDENCE
She references the seven chakras (human capital, inclusion, trust, resilience, science, resources, social good) as the structure for global cooperation and accountability throughout the discussion [4] and reiterates the People, Planet, Progress vision in her closing remarks [456-457].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Accountability, traceability, and citizen confidence are integral to national AI strategies, aligning with the seven-pillar framework [S17][S18][S20].
MAJOR DISCUSSION POINT
Accountability within seven‑pillar AI framework
Argument 3
Vision of People, Planet, Progress guided by seven chakras shapes the AI future
EXPLANATION
Mridu concludes that aligning AI development with the three guiding principles—People, Planet, and Progress—and the seven chakras will redefine competitiveness, rebuild public trust, and future‑proof institutions.
EVIDENCE
In her closing, she ties together People, Planet, Progress with the seven pillars, stating that this alignment will redefine competitiveness, rebuild trust, and future-proof institutions for decades ahead [456-457].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The People-Planet-Progress narrative is echoed in AI strategies that emphasize human development, environmental sustainability, and economic progress [S22][S23][S20].
MAJOR DISCUSSION POINT
People, Planet, Progress as AI guiding principles
Agreements
Agreement Points
Trust is the essential foundation for AI innovation and deployment
Speakers: Paul Hubbard, Erik Ekudden, Hari Shetty, Divyesh Vithlani
Trust as foundation for innovation
Secure, trusted network as backbone of AI trust
Consistent, hallucination‑free performance earns trust
Platform‑first approach with layered ethical data and model governance
All speakers stress that trust, whether as a societal foundation, network security, reliable performance, or embedded platform governance, is a prerequisite for successful AI adoption [39-41][80-82][147-152][130-138].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust is highlighted as a prerequisite for scaling AI systems in India’s responsible AI discourse [S45], European AI policy stresses “trustability” and traceability as core principles [S54][S56], and global discussions note the need for guardrails to avoid both over-trust and mistrust of AI [S42].
Robust governance frameworks and guardrails are needed for responsible AI at scale
Speakers: Divyesh Vithlani, Erik Ekudden, Paul Hubbard, Hari Shetty
Platform‑first approach with layered ethical data and model governance
Guardrails from telecom translate to AI agents, ensuring accountability
Governments shift from cautious to active posture, managing risk with guardrails
Risk is manageable with the right tool‑set
The panel agrees that AI must be deployed within structured governance (platform-centric layers, execution/control planes, telecom-derived guardrails, and active government risk management) to ensure accountability and safety [130-138][190-192][351-353][344-345].
POLICY CONTEXT (KNOWLEDGE BASE)
Deshpande’s framework calls for process and governance guardrails that protect innovation while ensuring responsibility [S41]; UN-sponsored panels underline the necessity of responsible deployment frameworks for agentic AI [S43]; and EU approaches advocate balanced regulation that safeguards rights without stifling innovation [S56].
Public‑sector risk perception tends to be overly cautious, requiring balanced management
Speakers: Erik Ekudden, Paul Hubbard, Hari Shetty
Public sector may overestimate risk, potentially stalling innovation
Governments shift from cautious to active posture, managing risk with guardrails
Risk is manageable with the right tool‑set
Erik notes that governments can over-estimate AI risk, Paul describes a shift from caution to active risk management, and Hari emphasizes that risk is manageable, together suggesting a need for calibrated oversight [346-348][351-353][344-345].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses define risk as likelihood × severity and note divergent risk perceptions across contexts, urging calibrated risk quantification [S57]; IGF-style reports highlight tension between optimistic and cautious public-sector views [S58]; practitioners observe governments may overestimate AI risk, potentially hindering adoption [S59]; European policy recommends intervention only when necessary, balancing caution with innovation [S56].
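The risk framing cited above (risk = likelihood × severity) can be made concrete with a few lines. The scenario names and numbers below are invented for illustration, not drawn from the session or the cited analyses; the point is that two actors ranking the same hazards can diverge simply by disagreeing on either factor.

```python
def risk_score(likelihood: float, severity: float) -> float:
    """Expected-harm style score: probability of occurrence times impact."""
    return likelihood * severity

# Hypothetical scenarios: a frequent low-impact failure vs. a rare high-impact one.
scenarios = {
    "model hallucination in a chatbot": risk_score(0.30, 2.0),
    "erroneous automated payment":      risk_score(0.01, 9.0),
}

# A cautious regulator who doubles the severity estimates may flip the ranking
# a more optimistic actor would produce, which is the divergence noted above.
for name, score in sorted(scenarios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

Calibrated oversight, in this framing, means agreeing on how both factors are estimated rather than debating "risk" as a single undifferentiated quantity.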
AI will dramatically increase decision‑making speed and organisational responsiveness
Speakers: Hari Shetty, Divyesh Vithlani, Paul Hubbard
Decision velocity will dramatically increase, reshaping organisational processes
Banking ROI measured via faster response and legacy modernisation
AI will help respond to change faster (implied)
Hari predicts a surge in decision velocity, Divyesh links AI to faster response times and legacy modernisation, and Paul’s remarks on rapid experimentation reinforce the view that AI accelerates organisational agility [447-452][334-340].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council discussions recognize AI’s capacity to enable ultra-fast decision-making in security contexts [S65]; diplomatic forums cite AI-driven predictive analytics to accelerate policy choices [S66]; broader summit narratives affirm AI’s transformative speed benefits for organisations [S58].
Enterprises must move from pilot projects to always‑on, reliable AI services
Speakers: Hari Shetty, Divyesh Vithlani, Paul Hubbard
Enterprise AI must move beyond pilots to reliable, always‑on services
Platform‑first approach enables scale and reliability
Trust as foundation enables continuous innovation
The speakers concur that AI should no longer be confined to pilots; instead, it must be delivered as continuous, production-grade services supported by trustworthy platforms [147-152][130-138].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry leaders stress the transition from pilots to production-ready platforms as essential for scale, citing Wipro’s “proof over promise” approach [S46] and calls to move from fragmented data to interoperable systems [S45][S47].
Similar Viewpoints
All three stress a user‑centred, inclusive approach that meets people where they are and provides trustworthy infrastructure for AI adoption [44-46][130-138][49-53].
Speakers: Paul Hubbard, Divyesh Vithlani, Erik Ekudden
People‑first, democratic participation builds confidence
Platform‑first approach empowers the entire organisation
Secure, trusted network serves all users
Both propose a layered architecture that separates execution from oversight, allowing real‑time monitoring and accountability of AI agents [206-213][221-223][190-192].
Speakers: Divyesh Vithlani, Erik Ekudden
Execution plane vs. control plane enables dynamic agent oversight
Guardrails from telecom translate to AI agents, ensuring accountability
Unexpected Consensus
AI as a catalyst for financial inclusion and faster cross‑border payments
Speakers: Paul Hubbard, Divyesh Vithlani
National AI strategy must be transparent, spread benefits, and protect citizens
Banking will become seamless with AI avatars and instant cross‑border payments
While coming from different sectors, both agree that AI should be leveraged to deliver inclusive financial services, reducing payment times and reaching underserved populations [166-168][408-410].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s banking-sector AI policy balances experimentation with systemic-risk controls to promote inclusive finance [S48]; the AI Impact Summit 2026 highlighted AI’s role in expanding financial services and streamlining cross-border transactions [S55]; broader development discourse links AI to financial-inclusion objectives in the Global South [S44].
Overall Assessment

The panel shows strong consensus on trust as the cornerstone of AI, the necessity of robust governance and guardrails, and the transformative impact of AI on speed and continuous service delivery. There is moderate agreement on risk perception and a shared vision of inclusive, user‑centred AI ecosystems.

High consensus on foundational principles (trust, governance, always‑on services) with medium consensus on risk management and sector‑specific impacts, suggesting that coordinated policy and platform‑centric strategies are likely to gain broad support across government, industry, and academia.

Differences
Different Viewpoints
Extent of risk overestimation and appropriate governmental posture toward AI
Speakers: Erik Ekudden, Paul Hubbard
Public sector may overestimate risk, potentially stalling innovation
Governments shift from cautious to active posture, managing risk with guardrails
Erik argues that the public sector often overestimates AI risk, which can hold back innovation [346-348]. Paul counters that the Australian government has moved from a cautious stance to a more active posture, embracing risk while putting guardrails in place to manage it [351-353].
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly work defines risk as likelihood × severity and notes divergent risk perceptions, suggesting governments may over-estimate AI hazards [S57]; IGF-derived analyses document disagreements on risk prioritisation between public and private actors [S58]; industry commentary confirms perception of governmental over-caution [S59].
Preferred locus of AI integration and governance – network‑centric vs platform‑centric
Speakers: Erik Ekudden, Divyesh Vithlani
Networks evolving from passive carriers to active AI enablers
Platform‑first approach with layered ethical data and model governance
Erik emphasizes that telecom networks should evolve into an intelligent fabric that actively hosts AI inference workloads, making the network the primary enabler of trustworthy AI [75-78]. Divyesh advocates a platform-first strategy that embeds ethical data and model governance layers, separating execution and control planes to oversee agents, positioning the platform as the central point for safe, scalable AI deployment [130-138].
Unexpected Differences
Public‑sector risk perception versus active governmental engagement
Speakers: Erik Ekudden, Paul Hubbard
Public sector may overestimate risk, potentially stalling innovation
Governments shift from cautious to active posture, managing risk with guardrails
It is surprising that a telecom executive (Erik) and a government economist (Paul) diverge on whether the public sector is still overly cautious. Erik sees persistent over-estimation of risk that could impede progress [346-348], while Paul highlights a recent shift toward a more proactive stance with concrete guardrails [351-353]. This contrast was not anticipated given their respective domains.
POLICY CONTEXT (KNOWLEDGE BASE)
Reports from IGF and related forums describe a split between cautious risk perception and calls for proactive government involvement in AI governance [S58]; practitioner commentary indicates governments risk stalling innovation by being overly risk-averse [S59]; policy guidance recommends balanced engagement rather than passive caution [S56].
Overall Assessment

The panel largely agrees on the centrality of trust, people‑first approaches, and the need for robust governance. The main points of contention revolve around how risk is perceived and managed by the public sector and whether AI should be primarily embedded in telecom networks or delivered via enterprise platforms. These disagreements are moderate in intensity and reflect differing professional lenses rather than fundamental opposition.

Moderate – the disagreements are focused on implementation pathways and risk framing, which could influence policy coordination and industry‑government collaboration but do not undermine the shared commitment to trustworthy, inclusive AI.

Partial Agreements
All speakers concur that trust is a prerequisite for AI deployment—Paul frames trust as the foundation for innovation [39-41]; Erik stresses that secure networks provide the backbone of AI trust [80-82]; Hari notes that reliable, hallucination‑free performance builds trust over time [147-152]; Divyesh’s platform‑first design embeds trust through ethical governance layers [130-138]. However, they differ on where that trust should be instantiated (network vs platform).
Speakers: Paul Hubbard, Erik Ekudden, Hari Shetty, Divyesh Vithlani
Trust as foundation for innovation
Secure, trusted network as backbone of AI trust
Consistent, hallucination‑free performance earns trust
Platform‑first approach with layered ethical data and model governance
Both emphasize a people‑centric approach: Paul calls for meeting citizens where they are and engaging them democratically [44-46], while Divyesh stresses empowering the entire organization through a platform that makes AI as intuitive as everyday tools, reflecting a user‑first mindset [130-138]. Their focus aligns on inclusivity, though one targets citizens broadly and the other internal enterprise users.
Speakers: Paul Hubbard, Divyesh Vithlani
People‑first, democratic participation builds confidence
Platform‑first approach with layered ethical data and model governance
Takeaways
Key takeaways
Trust is the foundation for AI innovation; it must be built through people‑first, democratic participation and reliable, secure infrastructure.
The network is evolving from a passive data carrier to an active, intelligent fabric that hosts AI inference (e.g., AI glasses) and must be secure, low‑latency, and energy‑efficient.
A platform‑first approach with layered ethical, data, and model governance enables scalable, enterprise‑wide AI while maintaining guardrails and accountability.
Proof over promise requires a problem‑first mindset, continuous‑operation models, and moving beyond pilot projects to always‑on services.
Measuring AI value should treat AI as a core capability; productivity is an early signal, complemented by “plus scores” that track failures, hallucinations, and quality.
Risk is manageable with proper toolsets and governance; governments may over‑estimate risk, but a balanced, cautious‑yet‑active posture is needed.
AI‑native nations will be distinguished by capability, competence, curiosity, and the ability to adapt institutions and workforce to AI‑driven change.
Sustainability can be achieved through energy‑efficient hardware, software, and models; distributed inference reduces overall energy impact.
Future AI ecosystems will feature autonomous networks, physical AI (robots, drones), and AI avatars, dramatically increasing decision velocity and reshaping work, finance, and daily life.
Resolutions and action items
Adopt a platform‑first architecture with distinct execution and control planes for AI and agent oversight (proposed by Divyesh Vithlani).
Implement layered ethical, data, and model governance within AI platforms to embed trust and compliance (Divyesh Vithlani).
Leverage existing telecom guardrails as a baseline for AI agent accountability and extend them to AI services (Erik Ekudden).
Use the AI CoLab model to foster cross‑sector collaboration among government, industry, academia, and NGOs for responsible AI deployment (Paul Hubbard).
Measure AI outcomes using productivity metrics and “plus scores” that capture failures, hallucinations, and quality of results (Hari Shetty).
Prioritize people‑first, participatory approaches when introducing AI services to build public confidence (Paul Hubbard).
Invest in energy‑efficient hardware, software, and smaller inference models to mitigate AI’s power consumption (Erik Ekudden).
Develop dynamic oversight mechanisms via control‑plane monitoring to continuously supervise agent actions (Divyesh Vithlani).
Unresolved issues
Specific standards or metrics for the proposed “plus scores” and how they will be operationalized across industries.
Detailed roadmap for scaling AI‑enabled intelligent fabric (5G/6G) to support billions of edge devices and AI glasses.
Concrete mechanisms for ensuring inclusive trust across diverse demographic groups, especially in rural and marginalized communities.
How regulatory frameworks can evolve without stifling innovation—exact balance between oversight and flexibility remains open.
Implementation details for dynamic, real‑time accountability across the full AI stack (network, cloud, edge, device).
Clear guidance on transitioning legacy banking systems to AI‑native platforms while managing CAPEX constraints.
Suggested compromises
Use existing telecom security and safety guardrails as a starting point for AI agent regulation rather than imposing entirely new regulations (Erik Ekudden).
Adopt a balanced risk posture: governments start cautiously but progressively shift to an active, risk‑managed approach as understanding improves (Paul Hubbard).
Combine people‑first participatory design with technical guardrails to build trust without slowing innovation (Paul Hubbard).
Treat AI as a capability rather than a pure ROI driver, allowing investment in foundational platforms while still delivering measurable productivity gains (Hari Shetty).
Thought Provoking Comments
AI is not about technological adoption. It’s all about what can generate public value, what generates public welfare.
Frames AI from a public‑policy/economic perspective rather than a purely technical race, reminding the audience that the ultimate metric is societal benefit.
Set the tone for the discussion on trust and responsibility, prompting subsequent speakers (e.g., Erik on network trust, Divyesh on platform governance) to ground their technical proposals in public value rather than hype.
Speaker: Paul Hubbard
We shouldn’t frame it as trust versus innovation; trust is the foundation that lets you make the innovation.
Challenges the common narrative that safety slows progress, proposing instead that trust enables faster, more sustainable innovation.
Shifted the conversation from a perceived trade‑off to a synergistic relationship, leading Erik to discuss how the network itself can embed trust and Hari to outline a “proof‑over‑promise” framework.
Speaker: Paul Hubbard
The network is becoming an intelligent fabric that hosts AI inference workloads – think AI glasses that off‑load processing to the edge.
Introduces a concrete evolution of infrastructure: from passive connectivity to an active, AI‑enabled platform, linking hardware, edge computing, and user experience.
Opened a new topic on the role of telecom in AI governance, inspired Divyesh to talk about platform layers and agents, and set up later sustainability discussions about energy‑efficient inference.
Speaker: Erik Ekudden
We take a platform‑first approach: build a layered AI platform (data, model, knowledge, context) with built‑in ethical and governance controls, so end‑users can use AI as naturally as opening Excel.
Provides a practical blueprint for scaling trustworthy AI in a regulated industry, emphasizing usability without sacrificing safeguards.
Guided the dialogue toward concrete implementation tactics, prompting Hari to articulate his “proof over promise” principles and Erik to discuss network‑level guardrails.
Speaker: Divyesh Vithlani
Proof over promise: start with the problem, not the model; ensure solutions work continuously; earn agentic trust through consistent performance.
Distills AI delivery into four actionable tenets, moving the conversation from abstract ideals to measurable outcomes.
Created a turning point that reframed the rest of the panel’s discussion around operational rigor, influencing Divyesh’s talk of performance appraisal for agents and Erik’s emphasis on reliability.
Speaker: Hari Shetty
When you introduce agents at scale, accountability follows a hierarchy of decision‑making – responsibility resides in the domain providing the service, and existing telecom guardrails can be translated one‑to‑one to the AI world.
Bridges the gap between traditional telecom regulation and emerging AI agent governance, offering a concrete governance model.
Prompted Divyesh to elaborate on dynamic oversight via execution and control planes, and reinforced the theme that existing infrastructure can be leveraged for AI accountability.
Speaker: Erik Ekudden
Agents get performance appraisals just like humans – we monitor token consumption, output quality, and even have an ‘Agent University’ for continual learning.
Novel analogy that human resource practices can be applied to autonomous AI agents, highlighting the need for ongoing governance and continuous improvement.
Deepened the conversation on operational oversight, leading Hari to mention “plus scores” for failure tracking and reinforcing the idea that trust is earned over time.
Speaker: Divyesh Vithlani
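The agent "performance appraisal" idea above could be sketched as follows. All fields, weights, and thresholds here are invented for illustration; the session describes the concepts (monitoring token consumption and output quality, "plus scores" counting failures and hallucinations, continual learning via an "Agent University") but not a concrete formula.

```python
from dataclasses import dataclass

@dataclass
class AgentTelemetry:
    tasks_completed: int
    failures: int          # tasks that did not finish or violated policy
    hallucinations: int    # outputs flagged as factually wrong
    tokens_used: int       # cost signal, tracked but not scored here

def plus_score(t: AgentTelemetry) -> float:
    """Quality share of completed work, penalized for failures and hallucinations."""
    if t.tasks_completed == 0:
        return 0.0
    bad = t.failures + t.hallucinations
    return max(0.0, 1.0 - bad / t.tasks_completed)

def needs_retraining(t: AgentTelemetry, threshold: float = 0.9) -> bool:
    # Agents below the bar would be routed to continual learning
    # (the "Agent University" notion) or offboarded entirely.
    return plus_score(t) < threshold

agent = AgentTelemetry(tasks_completed=200, failures=4, hallucinations=6,
                       tokens_used=1_200_000)
print(round(plus_score(agent), 2))  # 0.95
print(needs_retraining(agent))      # False
```

The appeal of the analogy is that the same review cadence used for human staff (periodic scoring, a remediation path, a termination path) maps directly onto telemetry that agent platforms already collect.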
AI CoLab is a cross‑sector initiative that brings government, industry, academia, and NGOs together to solve real problems, not just to tinker with technology.
Emphasizes collaborative governance as essential for responsible AI, moving beyond siloed efforts.
Reinforced earlier points about public‑private partnership, gave a concrete example of how trust can be institutionalized, and set the stage for the forward‑looking “AI‑native nations” discussion.
Speaker: Paul Hubbard
What will separate AI‑native nations from AI‑dependent ones are capability, competence, and curiosity – not just compute or data‑centers.
Shifts focus from infrastructure to human capital and cultural factors, suggesting that long‑term competitiveness hinges on mindset and adaptability.
Prompted the panel to reflect on talent pipelines and education, influencing Erik’s sustainability remarks and Hari’s three‑step plan for CEOs.
Speaker: Paul Hubbard
Energy‑efficient hardware, software, and models will keep AI’s carbon footprint in check; distributed inference actually reduces emissions in other sectors by up to 15%.
Counters the narrative that AI is inherently unsustainable, offering a balanced view that aligns AI growth with climate goals.
Steered the discussion toward sustainability, leading Paul and Divyesh to mention responsible deployment and risk management, and tying back to the opening theme of People, Planet, Progress.
Speaker: Erik Ekudden
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from abstract aspirations to concrete, actionable frameworks. Paul Hubbard’s framing of AI as a public‑value endeavour and his insistence that trust underpins innovation set a foundational narrative. Erik Ekudden’s vision of the network as an “intelligent fabric” and his sustainability insights expanded the technical scope, while Divyesh Vithlani’s platform‑first strategy and Hari Shetty’s “proof over promise” principles supplied practical roadmaps for trustworthy deployment. The interplay of these comments—each prompting deeper elaboration from other panelists—created a dynamic flow that oscillated between policy, infrastructure, governance, and future competitiveness, ultimately delivering a cohesive vision of how People, Planet, and Progress can be aligned through the seven “chakras” of AI cooperation.

Follow-up Questions
What specific measurement frameworks, reporting mechanisms, and independent oversight structures should governments adopt to ensure accountable and responsible AI deployment?
The discussion highlighted the need for clear accountability at the national level, but concrete frameworks were not detailed, indicating a gap that requires further definition and research.
Speaker: Mridu Bhandari, Paul Hubbard
What standardized metrics can be used to evaluate AI ROI beyond productivity, such as trust scores, decision velocity, and risk mitigation?
While ROI was discussed, participants noted the lack of agreed‑upon metrics for trust, speed of decision‑making, and risk, suggesting a need for systematic measurement approaches.
Speaker: Mridu Bhandari, Hari Shetty
How can dynamic oversight of AI agents be operationalized in highly regulated industries, including mechanisms for performance appraisal, accountability, and termination of misbehaving agents?
The panel raised the concept of “agent university” and performance management for agents but did not outline concrete governance processes, indicating a research need.
Speaker: Mridu Bhandari, Divyesh Vithlani
What best practices and technological approaches can achieve energy‑efficient AI hardware, software, and model design to reconcile large‑scale AI expansion with sustainability goals?
The conversation acknowledged AI’s high energy demand and the importance of efficient hardware/software, yet specific strategies remain unexplored.
Speaker: Mridu Bhandari, Erik Ekudden
How can the AI CoLab model of cross‑sector collaboration be scaled, replicated, and evaluated for effectiveness in fostering responsible AI innovation globally?
The AI CoLab was presented as a promising partnership framework, but details on scaling, governance, and impact measurement were not provided.
Speaker: Mridu Bhandari, Paul Hubbard
What concrete use‑cases, performance metrics, and implementation pathways exist for AI‑driven improvements in cross‑border payments and open finance within the Indian context?
The panel suggested AI could accelerate payments, yet specific pilots, success criteria, and regulatory considerations were left open.
Speaker: Mridu Bhandari, Divyesh Vithlani
Beyond infrastructure, what policies and programs can nations adopt to develop the capability, competence, and curiosity needed to become AI‑native economies?
Capability, competence, and curiosity were identified as differentiators for AI‑native nations, but actionable national strategies were not detailed.
Speaker: Mridu Bhandari, Paul Hubbard
What architectural design principles and standards are required for AI‑native networks that can support massive, low‑latency AI workloads such as AI glasses, robotics, and edge inference at scale?
The shift to an intelligent fabric was discussed, but concrete network design guidelines and scalability benchmarks remain undefined.
Speaker: Mridu Bhandari, Erik Ekudden
How can “plus scores” be standardized across industries to monitor AI failures, hallucinations, and overall quality of AI outputs?
The concept of plus scores was introduced as a quality metric, yet a universal framework for calculation and benchmarking is lacking.
Speaker: Hari Shetty
What real‑time governance mechanisms are needed to detect and resolve conflicts between AI agents and human operators?
The panel mentioned conflict detection between agents and humans but did not specify detection algorithms, escalation protocols, or governance structures.
Speaker: Divyesh Vithlani
What practical tools and assessment criteria can boards use to measure AI thrust readiness and risk tolerance within their organizations?
Leaders expressed uncertainty about how to gauge AI readiness at the board level, indicating a need for ready‑to‑use assessment frameworks.
Speaker: Mridu Bhandari
What are the long‑term impacts of AI adoption on job roles and skill pipelines in the banking and financial services sector, and how should workforce planning adapt?
While AI’s transformative potential was highlighted, detailed analysis of workforce displacement, reskilling needs, and talent pipelines was not provided.
Speaker: Divyesh Vithlani
How can organizations quantitatively measure improvements in decision velocity attributable to AI, and what benchmarks should be used?
Decision velocity was identified as a key benefit, but specific measurement methods and industry benchmarks were not discussed.
Speaker: Hari Shetty
What remediation and accountability processes should be established when AI agents produce hallucinations or erroneous outputs, including possible “firing” of agents?
The notion of terminating agents for poor performance was raised, yet concrete policies for remediation and accountability are missing.
Speaker: Divyesh Vithlani
What migration pathways and integration strategies enable AI to be incorporated into legacy banking systems efficiently without disrupting operations?
The challenge of modernizing 90 % of banks’ legacy platforms with AI was noted, but detailed migration frameworks were not outlined.
Speaker: Divyesh Vithlani
How can policymakers balance the need for regulation with the risk of stifling AI innovation, especially in the telecom and network domain?
The panel warned against premature regulation, but did not propose a balanced regulatory approach or criteria for timing.
Speaker: Erik Ekudden

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Trusted Connections: Ethical AI in Telecom & 6G Networks


Session at a glance: Summary, keypoints, and speakers overview

Summary

The inaugural session of the India AI Impact Summit 2026 focused on the convergence of artificial intelligence and telecommunications, highlighting AI as a transformative force across industries [6][7][14-16]. TRAI Chairman Anil Kumar Lahoti stressed that AI is no longer an add-on but a foundational layer that will become intrinsic to upcoming 6G networks, making the telecom infrastructure “AI-native” and a backbone of India’s AI ecosystem [28-34][31-33]. He cited concrete AI deployments that already improve network performance, predict faults, save energy and block hundreds of millions of spam calls daily, demonstrating tangible benefits of responsible AI use [38-43]. The chairman also warned that the scale of AI-driven decisions makes trust, transparency and accountability essential, outlining India’s risk-based regulatory framework, the 2023 AI recommendations, the 2024 sandbox for live testing, and the MANOV vision for human-centric AI governance [46-53][54-56].


Ms Pallavi Mishra then introduced the first plenary on preparing telecom networks for the AI era, featuring panelists from Ericsson, Qualcomm, Nokia and Tejas Networks [74-81][82-84]. Ericsson’s Magnus Ewerbring noted that India already enjoys over 90 % population coverage with 5G and that AI has increased spectrum capacity by about 10 % and improved energy efficiency by roughly 33 %, with further gains expected as networks move toward level-5 autonomy in 6G [101-108][206-214]. Qualcomm’s Vinesh Sukumar explained that democratizing AI on edge devices requires hybrid edge-cloud architectures, keeping privacy-sensitive and latency-critical functions on the device while training and large-scale inference run in the cloud [119-128][219-229]. Nokia’s Pasi Toivanen stressed that capturing AI value demands a collaborative ecosystem, clear value-sharing models, and proactive security assessments embedded in the network itself [145-154][158-160].


Shantigram Jagannath from Tejas Networks argued that AI adoption must address both cost optimisation (CAPEX/OPEX) and new revenue streams, and that choosing between AI-first or bolt-on architectures depends on existing equipment and long-term investment cycles [169-184][185-190]. He also highlighted the need for trust and regulation to ensure equitable service, proposing network slicing and AI-driven management to prevent rural users from receiving lower bandwidth than urban users [188-196][255-266]. The moderator Ritu Ranjan Mittar queried how AI differs from traditional self-optimising networks, and Magnus confirmed that AI enables deeper data analysis, yielding measurable capacity and energy gains beyond existing SON capabilities [196-198][199-214]. Vinesh added that decisions involving user privacy and real-time responsiveness belong at the edge, while fleet management and model training remain cloud-centric, underscoring a hybrid approach [219-229].


Pasi suggested that most optimisation decisions should be pushed to the network itself, reducing reliance on regional data centres, whereas Shantigram emphasized that future AI agents will vastly outnumber human devices, requiring new business models and regulatory thinking [239-248][277-286]. The session concluded that AI will fundamentally reshape telecommunications, but its success will depend on coordinated industry innovation, robust governance, and inclusive deployment to ensure trusted, sustainable services for all users [62-64][55-56].


Keypoints


Major discussion points


AI is becoming the foundational, “AI-native” layer of telecom networks, especially with the upcoming 6G era.


The Chairman emphasized that AI is no longer an add-on but the intelligence layer of telecom, and that future 6G networks will be intrinsically AI-native rather than merely AI-enabled [28-34].


Current AI deployments are already delivering tangible benefits in Indian telecom.


AI is used for network performance optimisation, fault prediction, energy-efficiency gains, and fraud/spam mitigation – e.g., operators report significant energy savings and the blocking of ~400 million spam calls daily [38-44].


A risk-based regulatory framework is being put in place to ensure responsible AI use.


TRAI’s 2023 recommendations and the 2024 sandbox guidelines introduce a tiered approach (self-regulation for low-risk use cases, stricter obligations for high-risk ones) and align with the national AI-governance principles [46-48][49-53].


Technical challenges such as security, sustainability, and the edge-cloud split must be addressed.


The panel highlighted concerns about AI-driven attacks, the compute-intensive nature of AI and its energy impact, and the need to decide which functions stay on the edge versus the cloud [92-96][119-128].


Building an inclusive AI-telecom ecosystem and new business models is essential.


Speakers stressed the importance of collaborative ecosystems, trust-based “app-store” models for AI services, and ensuring affordable access for the bottom-of-the-pyramid while exploring revenue opportunities from AI-enabled services [145-152][163-170][184-190].


Overall purpose / goal of the discussion


The session was convened to launch the “Responsible AI in Telecom” track of the India AI Impact Summit 2026, to share how AI is already transforming India’s telecom infrastructure, to outline regulatory and governance measures that will guide safe and trustworthy AI adoption, and to set the agenda for deeper technical and policy deliberations on preparing networks, protecting consumers, and fostering an inclusive AI-driven ecosystem.


Overall tone and its evolution


The conversation began with a formal, celebratory tone – acknowledging TRAI’s milestones and the promise of AI-driven networks. It then shifted to an analytical, evidence-based tone as speakers presented concrete benefits and regulatory steps. Mid-session, the tone became more cautionary and problem-solving, focusing on security, sustainability, and architectural decisions. By the closing remarks, the tone turned collaborative and forward-looking, emphasizing ecosystem partnership and inclusive growth. Throughout, the discourse remained professional and constructive.


Speakers

Ms. Pallavi Mishra


– Role/Title: Moderator, event host (India AI Impact Summit 2026)


– Area of Expertise: Telecommunications policy, AI in telecom (implied from hosting role)


Shri Anil Kumar Lahoti


– Role/Title: Honorable Chairman, Telecom Regulatory Authority of India (TRAI)


– Area of Expertise: Telecom regulation, AI governance, policy leadership [S7][S8]


Shri Ritu Ranjan Mittar


– Role/Title: Member, TRAI; Session Moderator; Telecom policy expert with over three decades of experience


– Area of Expertise: Telecom networks, spectrum policy, regulatory frameworks, AI in telecom [S2]


Mr. Pasi Toivanen


– Role/Title: Representative, Nokia (speaker)


– Area of Expertise: Telecom network architecture, AI-enabled solutions, ecosystem collaboration [S1]


Mr. Shantigram Jagannath


– Role/Title: Technology Strategist, Tejas Networks (speaker)


– Area of Expertise: Telecom network design, AI-driven innovations, cost-revenue frameworks, ecosystem trust [S3]


Dr. Vinesh Sukumar


– Role/Title: Vice President, Product Management, Qualcomm


– Area of Expertise: Mobile AI, edge-cloud hybridization, AI inference on devices, privacy-aware AI deployment [S6]


Magnus Ewerbring


– Role/Title: Chief Technology Officer, Asia Pacific, Ericsson


– Area of Expertise: Telecom network automation, AI-native 5G/6G evolution, performance optimization, autonomous networks [S13]


Audience


– Role/Title: Attendee(s) from the session audience


– Area of Expertise: Not specified


Additional speakers:


Mr. Magnus Eberberg / Magnus Ewerbring – (same individual as Magnus Ewerbring listed above; appears under a variant spelling in the transcript).


No other speakers were identified outside the provided list.


Full session report: Comprehensive analysis and detailed insights

The inaugural session of the India AI Impact Summit 2026 opened with Ms Pallavi Mishra welcoming a diverse audience of operators, OEMs, policymakers, academia and media, noting the event’s placement on the summit sidelines and the 29th anniversary of TRAI’s role in shaping India’s telecom landscape [5-9][11-13]. She framed the theme as the emergence of “self-healing” networks that can anticipate faults and deliver uninterrupted connectivity, stressing that this vision is already being realised through artificial intelligence [12-16].


TRAI Chairman Shri Anil Kumar Lahoti then delivered the inaugural address, describing AI as “the backbone for the intelligence era” and asserting that AI is no longer an optional add-on but an intrinsic, “AI-native” layer for forthcoming 6G systems [28-34][31-33]. He cited concrete deployments already in place: AI-driven predictive network management, fault detection, energy-efficiency optimisation, AI-and-blockchain-based filtering that blocks roughly 400 million spam calls and messages per day, the disconnection of about 2.1 million spam numbers, and a digital-consent acquisition framework piloted with banks to give consumers control over commercial communications [38-44]. To safeguard against the massive impact of algorithmic decisions, he outlined TRAI’s risk-based regulatory framework – which recognises that not all AI use cases carry the same level of risk, allowing low-risk cases to be self-regulated while imposing stricter obligations on high-risk ones – along with the 2024 sandbox for live AI testing and the human-centred MANOV vision that embeds safeguards by design [46-53][54-56]. He also announced that the second plenary will focus on building customer trust, covering governance, ethics, accountability and consumer protection [31-33].


After thanking the Chairman, Ms Mishra introduced the first plenary, “Preparing telecom networks for the AI era,” and announced a panel of senior technologists from Ericsson, Qualcomm, Nokia and Tejas Networks [74-81][82-84]. The session was moderated by Shri Ritu Ranjan Mittar, who opened with five thematic questions – (a) evolution of the access network, (b) core-network changes, (c) AI on handsets and its impact on the network, (d) AI-enabled security threats, and (e) compute intensity and sustainability – and noted the presence of Dr Tangirala and start-up representatives, underscoring the multi-stakeholder nature of the discussion [88-97].


Panel presentations


* Magnus Ewerbring, Chief Technology Officer, Asia-Pacific, Ericsson, highlighted India’s exceptional 5G footprint – over 90 % population coverage, with some estimates approaching 99 % – providing a robust platform for AI-enabled services [101-103]. He reported that AI-enhanced link-adaptation algorithms have already increased effective spectrum capacity by about 10 % and improved energy efficiency by roughly 33 % [206-214], and outlined Ericsson’s roadmap toward TM Forum Level 4 autonomy by 2028 and Level 5 “AI-native” autonomy for 6G [111-112][199-205].


* Dr Vinesh Sukumar, Vice-President, Qualcomm, argued for a hybrid AI model that moves inference to edge devices (phones, wearables, glasses) for privacy-sensitive and latency-critical functions, while retaining large-scale training, fleet management and drift handling in the cloud [119-128][219-224]. He called for further research to enable dynamic edge-cloud coexistence [225-232][233].


* Pasi Toivanen, Nokia, stressed that AI’s true value emerges from a 360° ecosystem linking OEMs, regulators, operators and startups, with clear value-sharing mechanisms and proactive security assessments embedded in the network [145-154][158-160]. He advocated pushing the majority of optimisation decisions into the network fabric to reduce reliance on regional data centres, thereby improving efficiency and resilience [239-248].


* Shantigram Jagannath, Tejas Networks, contrasted an “AI-first, AI-native” architecture with a bolt-on approach, noting the impact on CAPEX/OPEX and long-term sustainability [180-183]. He proposed a telecom-platform “app-store” where simple AI models can be uploaded and accessed under appropriate trust, regulation and safety frameworks [188-192][185-190]. Jagannath also noted that network slicing can be created with a single click, enabling rapid allocation of bandwidth to different slices and supporting equitable service for rural users [255-266].


The moderator’s follow-up questions elicited concise answers: Magnus clarified that AI offers deeper data analytics than traditional SON, delivering the cited capacity and energy gains [199-214]; Dr Sukumar reiterated that privacy-critical tasks belong at the edge while training and fleet management remain in the cloud [219-224]; Pasi emphasized pushing optimisation as far into the network as possible [239-248]; Jagannath highlighted the single-click slicing capability for equitable bandwidth distribution [262-266].


An audience member raised a practical concern about introducing AI across India’s 118 crore mobile connections without service disruption. Pasi responded that an end-to-end ecosystem approach – coordinating OEMs, regulators and startups – can avoid piecemeal patches and ensure smooth evolution [272-275]. Jagannath suggested that the simplest remedy is to procure additional equipment to handle the increased AI-driven traffic, illustrating a divergence between ecosystem-centric coordination and hardware-centric scaling [255-256][272-275].


Across the discussion, a strong consensus emerged that AI is a transformative driver for Indian telecom, delivering tangible efficiency gains, forming the backbone of the forthcoming AI-native 6G era, and requiring a risk-based, multi-stakeholder regulatory framework to preserve trust, accountability and inclusivity [28-34][38-44][46-53][54-56][145-154][199-214]. Moderate disagreements persisted regarding (i) optimal AI workload placement (edge vs. network vs. cloud) [219-224][239-248][206-214], (ii) AI-first versus bolt-on architectures [180-183][31-34], and (iii) scaling AI across the massive subscriber base (ecosystem coordination versus additional hardware) [272-275][255-256].


Key take-aways


1. AI will become an intrinsic layer of telecom, especially with 6G.


2. Current deployments already yield ~10 % capacity and ~33 % energy improvements and block hundreds of millions of spam communications daily.


3. TRAI’s risk-based framework (low-risk self-regulation, high-risk obligations), sandbox and MANOV vision provide the governance backbone.


4. A hybrid edge-cloud model is needed, with privacy-sensitive tasks at the edge and large-scale analytics in the cloud.


5. Sustainability must be addressed through network-centric optimisation and careful compute management.


6. New business models such as AI-model marketplaces and AI-driven revenue streams are emerging.


7. Equity and inclusion require AI-enabled network slicing and trust mechanisms to protect rural and bottom-of-the-pyramid users.


Action items identified were the continuation of the sandbox programme, industry commitment to TM Forum Level 4 by 2028, exploration of AI-first versus bolt-on pathways, development of transparent value-sharing ecosystems, and formulation of concrete security and sustainability metrics. Unresolved issues include precise edge-cloud workload allocation criteria, detailed safeguards against AI-powered attacks, mechanisms to guarantee fair bandwidth allocation, and economic models for the projected explosion of AI agents (potentially 500 crore) that will generate new traffic patterns.


Ms Mishra closed the session, thanking the panelists and participants, expressing confidence that responsible AI – underpinned by evolving regulatory frameworks and collaborative industry effort – will drive a positive transformation of India’s telecom sector, and invited attendees to the forthcoming discussions on customer trust and governance [294-295].


Session transcript: Complete transcript of the session
Ms. Pallavi Mishra

Thank you. This event is being organized on the sidelines of the India AI Impact Summit 2026. Today, we are gathered to discuss the new elements of AI and telecommunication. This event is organized by the Telecom Regulatory Authority of India, TRAI, in collaboration with India AI under the Ministry of Electronics and IT. Today, interestingly, on 20th February, TRAI marks 29 years of its journey in shaping India’s telecommunication landscape. Representatives from telecom operators, technology OEMs, policymakers, government, academia and media are present here. A heartfelt welcome to all of you. Imagine a telecom network that can heal itself, that can detect faults even before we know them and deliver seamless connectivity to billions without interruption.

This is not a science fiction. This is the power of AI in telecommunication. Today, AI is transforming industries. And as we look ahead, AI is all set to become even more transformative. From predictive network management to intelligent customer experiences, the possibilities are humongous. Now, it’s my proud privilege to invite Shri Anil Kumar Lahoti ji, Honorable Chairman TRAI, whose dynamic leadership continues to provide direction and strength to the telecom regulatory ecosystem of our country. Chairman sir needs no introduction. His vision has been instrumental in steering TRAI through a rapidly evolving digital landscape. I respectfully request Chairman sir to kindly deliver his inaugural address.

Shri Anil Kumar Lahoti

Distinguished leaders from the technology companies, from telecom service providers and industry associations, representatives from government, my colleague members from TRAI and other colleagues from TRAI, ladies and gentlemen. Good afternoon to all of you. It’s my privilege to welcome all of you to this session on Responsible AI in Telecom. This is a session by the side of the India AI Impact Summit. During the last few days, we have been listening to the world leaders from governments, technology companies, academia and civil society. AI is now and here. In this context, the very composition of this gathering reflects a shared recognition that artificial intelligence is no longer an emerging add-on to telecommunication. It’s a foundational capability shaping how networks are designed, operated and experienced by users.

Artificial intelligence and telecommunications complement each other to form the backbone for the intelligence era. Telecom networks are emerging as the primary carriers of AI, while AI itself is becoming the intelligence layer of telecom. In the upcoming 6G technology, AI will no longer be an application layer. It will be intrinsic. The telecom networks will be AI native. In this sense, telecom networks are no longer mere data carriers, but these are a central pillar of India’s AI infrastructure. Our nationwide fiber backbones and mobile broadband networks constitute one of the most widely distributed digital infrastructures in the world, operating within mature operational and regulatory frameworks. India’s scale gives special significance to this convergence. With over 1.3 billion telecom subscribers and over 1 billion data users, India operates telecom networks at a scale where AI-driven automation is no longer optional.

It is indispensable. AI is already being deployed to optimize network performance, predict faults, improve energy efficiency, enhance customer experience, and combat fraud and spam communications. These deployments demonstrate how AI can improve service quality, resilience, and consumer safety when applied responsibly at the network level. India is already witnessing clear gains from the responsible use of AI in the telecom sector. Operators are reporting significant energy savings with the use of AI. Due to the effectiveness of AI- and blockchain-based filtering, operators are now flagging or blocking nearly 400 million suspected spam calls or messages each day. Enhanced enforcement and improved oversight of service providers has already led to the disconnection of about 2.1 million spam numbers. The authority is also advancing the rollout of a digital consent acquisition framework, following successful pilot runs with the banks, to ensure consumers have digital control over consent for commercial communications.

At the same time, the scale at which AI systems operate is also increasing, which amplifies the impact of AI in telecom. Automated decisions taken by algorithms can affect millions of users simultaneously. This makes trust the central pillar of AI adoption in telecommunication. Efficiency gains cannot come at the cost of transparency, accountability or consumer rights. As telecom is an essential service, public confidence must remain at the core of AI-enabled transformation. The Government of India has been proactive in addressing this balance. The India AI mission and the recently articulated AI governance guidelines emphasize a huge role in the development of AI, one that encourages innovation while embedding safeguards by design.

These principles are particularly relevant for telecom, where AI systems interact continuously with citizens, enterprises and public institutions. TRAI has been aligned with this approach. In July 2023, TRAI issued recommendations on leveraging artificial intelligence and big data in the telecommunications sector, proposing a risk-based regulatory framework for AI in telecom. This approach recognizes that not all AI use cases carry the same level of risk. While low-risk applications may be guided through self-regulation, high-risk use cases, especially those directly affecting consumers, require stronger obligations around transparency, explainability, and human oversight. In April 2024, TRAI further facilitated this approach through its recommendations on the regulatory sandbox, enabling live network testing of AI-enabled solutions, including those relevant for 5G and future 6G networks, within defined safeguards.

This reflects our regulatory philosophy of enabling innovation while ensuring that public interest remains protected. The MANOV vision announced yesterday by the Honorable PM of India emphasizes a human-centric framework for ethical, accountable, inclusive AI governance. The principles of the MANOV vision are equally fundamental to AI governance in telecommunication. Coming back to the agenda of today’s program, the two plenary sessions we have planned capture this responsibility very well. The first session, featuring technology developers, will focus on preparing telecom networks for the AI era and examine how networks must evolve to become more intelligent, autonomous and resilient, while remaining secure and sustainable. The second session, with representatives from telecom service providers and GSMA, will address building customer trust through AI-driven operations, highlighting governance, ethics, accountability, and customer protection in an environment where AI-based decisions increasingly shape everyday connectivity.

As AI-driven telecom operations scale across borders, issues of interoperability, standards, and ethical alignment become global concerns. India’s experience of deploying AI in telecom at population scale offers valuable lessons, while international cooperation remains essential to address shared challenges. Let me conclude with this thought. AI will undoubtedly shape the future of telecommunications. But it is the way we design, govern and deploy AI that will determine whether this future is trusted, inclusive and resilient. TRAI remains committed to working with all stakeholders, industry, policymakers and international partners to ensure that AI in telecom serves both innovation and public good. I wish this session fruitful deliberations and look forward to the insights that will emerge from today’s discussions. Thank you.

Ms. Pallavi Mishra

Thank you very much, Chairman sir, for your inspiring address. You have illuminated how regulatory frameworks and policies are evolving for AI-driven telecom. Sir, your words make us believe that this transformation is moving forward in a positive way. We are delighted to hear your perspective, sir. Heartfelt gratitude to all the esteemed speakers and guests. The inaugural session has set a vibrant context for our upcoming discussions. Now our first plenary session will begin. Our first session is on preparing telecom networks for the AI era. In this session, our experts will discuss AI adoption in telecom, transparency, security, safety, sustainable AI networks, and embedding responsibility by design. To moderate this insightful discussion, we are honored to invite Shri Ritu Ranjan Mittar, member TRAI, an eminent telecom policy expert with over three decades of experience in telecom networks, global standards, and spectrum policy.

We are honored to have a distinguished panel of industry leaders. I welcome on dais our first panelist, Mr. Magnus Eberberg, Chief Technology Officer for Asia Pacific at Ericsson, a global telecom innovator who has played a key role in developing the region’s long-term technology vision from 5G deployment to 6G readiness. Joining us next is Dr. Vinesh Sukumar, Vice President, Product Management from Qualcomm, a seasoned product leader with over 20 years of experience in large-scale AI, deep learning, and mobile technologies across the global telecom ecosystem. We also welcome Mr. Parsi Tovnen of Nokia, who leads strategic engagement with governments and industry on AI and connectivity initiatives, driving large-scale ecosystem collaboration in cloud, AI and AI-RAN. Our next distinguished speaker is Mr.

Shantigram Jagannath from Tejas Networks, a technology strategist leading wireless products, network management systems, and AI-driven innovations. I request all the panelists to join for a quick photograph session at the request of the organizers. You may all stand for a moment. Thank you, sirs. We look forward to a thoughtful exchange from our panelists on how AI-driven innovations are redefining telecom capabilities for the future. Now I hand over the stage to our moderator, Shri Ritu Ranjan Mittar sir, to start the session.

Shri Ritu Ranjan Mittar

Thank you, Madam. Shri Lahoti, Chairperson, TRAI; my colleague Member Dr. Tangirala; doyens of the industry; OEMs and your staff accompanying you; my colleagues from TRAI; industry associations; young officers; representatives of start-ups: it's a very important session on how telecom networks will actually evolve with AI. We saw a lot of use cases in the last two-three days related to farming, education, healthcare, but today the focus is telecom. This first session is on the network; the next session, as you have been informed, is on the subscriber. In a telecom network we are introduced to the term access network, so one thing we would like to understand is how your access network is evolving with AI. We all know that in 6G, AI and communication is one of the six important use cases. Similarly, coming to the core, what kind of changes do you envisage when you implement AI, especially with respect to the core? The Chairperson spoke about the benefits that are already accruing for network management as AI gets into network management.

Another thing I would also request you to dwell on is that ultimately AI is going to come onto the handsets. Once AI comes onto the handsets, what kind of challenge will it throw to the network? Another important aspect, also raised by the Chairperson, is security. Are we going to be challenged by AI being used for attacking the networks, and what kind of steps do we intend to take? One more thing with AI is that it is compute-intensive, so sustainability, as also listed, is going to be important. It will be important to know what kind of steps the OEMs are envisaging to take care of the sustainability part of it.

You all, or rather your countries, are signatories to the Sustainable Development Goals of the UN, so this aspect is also very important. Now, without taking too much time, since everybody wants to listen to the experts here, I will first invite Mr. Magnus Ewerbring from Ericsson to kindly share his thoughts.

Magnus Ewerbring

Thank you very much, it's a great pleasure to be here. From the bottom of my heart, I'm very impressed by this event and the messages we've heard, and I think we just heard at the onset the very central message: leverage AI fully, for ordinary people, consumers, for industries, for enterprise and government functions. It needs compute resources, and those we often attribute to the data centers, and indeed they will be there. But we also need to connect them. And here I think India comes out in pole position, having well over 90% population coverage for 5G today; I've heard numbers even up to 99%. And not only that, but truly well-performing networks.

That is a fundamental platform to drive innovation on, and to drive innovation on together with AI. Looking at the five years to come, that indeed is what we will see. The nations that in a few years' time, by the end of this decade, are at the leading edge with the best of what 5G can do, connecting the data centers with the AI applications in devices, will be at the cutting edge and will have an advantage. And they are the ones with the smallest step into 6G. Again, I think India is in a supreme pole position to be there. The networks already use AI today, to a large degree, but still I argue it's only the beginning.

We use it as the systems are configured to become more and more autonomous. The goal in the industry, still a good challenge but reachable, is to reach what we call Level 4 in TM Forum terms; by 2028 many mobile operators aspire to be there. That's beautiful, that's very good, but it is a big undertaking, and AI is part of reaching that level. Taking the next step, I argue, is what we really do with 6G: reaching Level 5 and being fully autonomous, cutting the ties a bit and letting it run. That is what we have in mind now as we gradually set the standard for 6G. 6G shall be AI-native, and that will be a baseline for bringing on the knowledge we've gained in 5G and taking the next step with 6G. Lastly then, networks for AI, AI on the application side; here I argue it's important to go society-wide.

India is building its digital stack in an impressive way. Leverage AI in that. Industries come in and develop applications on top of it. That will drive efficiency locally and also open export possibilities. So leverage the 5G systems with AI and cloud, and then drive the way into 6G. Thank you.
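The TM Forum autonomy ladder referred to above (operators targeting Level 4 by 2028, with Level 5 as the 6G ambition) can be sketched as a simple lookup. The level descriptions below are loose paraphrases of the public L0-L5 taxonomy, not TM Forum's official wording, and the function is purely illustrative.

```python
# Illustrative sketch only: TM Forum autonomous-network levels as a lookup.
# Descriptions are paraphrased, not TM Forum's official text.
TMF_AUTONOMY_LEVELS = {
    0: "Manual operation and maintenance",
    1: "Assisted operation of repetitive tasks",
    2: "Partial autonomy: closed loops for specific units and scenarios",
    3: "Conditional autonomy: intent-aware loops, human on standby",
    4: "High autonomy: predictive, self-adapting across most scenarios",
    5: "Full autonomy: end-to-end, all scenarios, no human in the loop",
}

def autonomy_gap(current: int, target: int) -> list[str]:
    """Levels an operator must still climb through to reach the target."""
    return [TMF_AUTONOMY_LEVELS[level] for level in range(current + 1, target + 1)]

# An operator at Level 2 aiming for the 2028 goal of Level 4 has two steps left.
print(len(autonomy_gap(2, 4)))  # prints 2
```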

Dr. Vinesh Sukumar

Thank you. We at Qualcomm have been trying to really democratize AI and to see if we can make AI resident on devices. These devices could be personal devices like phones, laptops, your smart watches, your smart glasses, anything you can think of. But doing that kind of AI inference on the edge is not easy, especially when you want to go towards more personalized inference. I always joke with my colleagues that AI historically lacks common sense; translating that into something meaningful to the user needs a lot of investment. In India, we're seeing this significantly change. We're seeing a lot of players putting a lot of focus on how we get it attached to the user and make a more important connection that drives a lot of these experiences.

At the same time, it's also very critical how we look at coexistence between what runs on the cloud and what runs on the edge. It's what we call a concept of hybridization: a system built around cloud and edge inference working together. Hybrid AI, you know, working with our network operators, is also not an easy concept.

It's always been a challenge to understand which of these experiences should be transitioned towards the cloud, how you make those decisions, and what runs on the edge. I think there's a lot of research activity happening in this space, and India is definitely leading on this front. I totally expect in the next couple of months we will see a strong transition where there's a fundamental element of hybrid, where you can have both edge and cloud coexist. And last but not…

Shri Ritu Ranjan Mittar

Thank you, Mr. Sukumar. I would like to invite Mr. Pasi from Nokia now.

Mr Pasi Toivanen

Namaste. My sincere thanks for the opportunity to be here. It's exciting. It's exciting for the sake of this AI Summit, but it's also very personal for me. I have been coming to the beautiful country of India since 1999, and it has been fascinating to see this country evolving over these years, in different locations; we were talking earlier today about how many places I have experienced this evolution and development in. It's fascinating. So thanks for the opportunity to be here. It's very important. I don't know what to add after such a wonderful keynote by the Chairman and the other opening talks. I think it is obvious that AI will also impact telecommunication networks: the network evolution, the business models, the innovation, everything around it.

So instead of going deeper into the technology discussion, because that was very well covered by the previous panelists, industry leaders, I would focus these couple of minutes on how. Not what, but how. How to capture the best of the AI era. And for me, it comes down to who is able to build the most value-rich, most welcoming, most compelling ecosystems. Who is able to gather like-minded technology players, regulators, government agencies to think together about the opportunity, the risks, the challenges, that whole 360 of AI. I don't believe that any player, even if they had the smartest people on the planet, is able to fully capture the 360 by playing alone. Ecosystem it is. How are you able to proactively define the overall value of this AI evolution?

And then transparently and proactively agree how that value is distributed, how that value is maximized within that ecosystem. It might sound ideological, it might sound a little bit naive, but I'm a firm believer that it is key for success. It's the key to capturing and delivering all the opportunities within this AI era. It is the only way to address the security risks, which are now going to be different than at any time earlier. I mean, we are going to have a different access network. We are going to have a richer number of applications, applications which transfer more data, and that increased amount of data contributes to the security risks. We have to address that topic together, end to end.

And let's do it. India can show the direction forward for the whole world. There is a tradition of great collaboration, great innovation, so let's do it. Thank you.

Shri Ritu Ranjan Mittar

Thank you so much, Mr. Pasi. I would now like to invite Mr. Shantaram from Tejas.

Mr. Shantigram Jagannath

Yes. Hi, good afternoon, everyone. Am I audible? Not audible? Now it's okay. So my colleagues and my boss warned me that, you know, if you're coming in at number four, then you may not have much to say, so think of something different. So there are a few things that I would like to say, being born and brought up here in India, making stuff for India, so I'll try to anchor my few comments in terms of what we want to do here. I don't know if you happen to have read the book by C.K. Prahalad where he talked about the economy at the bottom of the pyramid.

So we carry with us a responsibility of solving problems primarily for the bottom of the pyramid. And Indian telecom especially has the additional responsibility of making sure that access is provided, at a very low cost, to pretty much everybody in the country. Now, in this context, if you want to accelerate the adoption of AI, I leave you with a framework of thought; this is what I use to figure out what it takes. You can either be looking at cost, or at revenue, or both. So when telecom operators have to figure out what they are going to invest in, these are the two guidelines or markers they can use.

When it comes to cost, it is optimization of either the CAPEX or the OPEX, in a simplistic sense; OPEX would be operational efficiency. There's a lot of literature available, and my friends from the global OEMs are far ahead in implementing some of this. We in India are chasing it as well. In terms of the hardware, it leaves you with, again, two choices for architecture. Do you find a way to do a completely AI-first, AI-native architecture? Or should we find a path that has a bolt-on capability? Because, you know, you have equipment that's already in the field, and there are a lot of investments that we are making very fresh.

Unlike some of the more mature networks in the West, where the capital cycle has already gone through 4G and 5G and has paid off over five to ten years, in India we have made fresh investments even as recently as last year. Now, if that kind of equipment has to be leveraged for the next ten years, how do you deliver AI? That is one challenge we'll have to solve. So there will be a choice: do you do AI-first or do you do a bolt-on? If you are looking at the revenue side of the equation, again in a simplistic sense, there are two big buckets. One is product enhancement, where the telecom network itself is enhanced in terms of efficiency. There's a lot of work happening in 3GPP and in the 6G alliances; a lot of thought is going into how you bring efficiency into the product, essentially optimizing the cost per bit, and doing it in a way that is a lot smarter than what we did before.

And on the other side, you have the possibility of generating revenue by providing AI through the telecom network, which Pasi also referred to. So that I see as a possibility of a two-way business model, an opportunity where on one side you have all the users, the communities, the people in our villages, the farmers, etc., and on the other side you have startups, companies building models, companies building specific agentic applications, and in between you have the telecom player. And there's a possibility if we do the frameworks right. What do I mean by frameworks? The very top thing in terms of framework is trust.

There has to be trust, there has to be some amount of regulation, and there has to be some amount of safety that comes with regulation. Then allow people to dynamically upload models, like an app store model. The telecom network essentially becomes a platform where simple, easy models can be uploaded and made accessible to all the users of the telecom network. It doesn't have to be only the bottom of the pyramid, but of course the bottom and all the layers above. So I'll leave you with this thought.

Shri Ritu Ranjan Mittar

So thank you so much, Mr. Shantaram, for sharing your thoughts. Now for the next few minutes I will first have a round of questions with the panel here, and then we will throw the session open for questions from the floor. So, Mr. Magnus, the first question is this: we are talking of optimization with AI, but we always used to say that 4G and 5G networks are self-organizing. The concept of SON, self-optimizing networks, was already there. So what does AI fundamentally change now?

Magnus Ewerbring

Okay, thank you. Well, it's always a journey. For every move we make, we learn, we get more insights, and then we take that and move further on. So in that sense, although we've made more and more parts of the systems autonomous in the past, it doesn't mean we cannot do more in the future. And we should do more, of course. Now, with AI, we get a very powerful tool to analyze a lot of data and to apply that knowledge onto a set of algorithms in the system. And we've had some fantastic observations there. One was in a part that's been optimized for decades: how we do the link adaptation, how we control the communication between the system and the device. There we've managed to improve the capacity by 10%.

So imagine a loaded spectrum of 100 megahertz; then you have the equivalent of 110 megahertz by applying this optimized algorithm. And that's the cutting edge of what we can do today. I'm sure we can make improvements on that tomorrow also. How much? I don't dare to state. In energy efficiency, as has been discussed, we applied AI analysis in another part of the system and made it part of the ongoing processing, and energy efficiency improved by 33% for that part. That is an enormous saving when applied to a pan-India network, of course. So, again, wherever you are, however far you've come, it is about continuing to do research to understand the future potential, and then step by step applying the knowledge you have derived.

And you will continue to take steps and climb the ladder. Thank you.
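The two figures quoted above reduce to simple multipliers: a 10% capacity gain from AI-driven link adaptation behaves like extra spectrum, and a 33% energy saving shrinks one subsystem's draw accordingly. This sketch only restates that arithmetic; the function names and the 1000 kWh baseline are our own illustrative choices.

```python
# Restating the speaker's two quoted gains as arithmetic. The 10% and 33%
# figures come from his remarks; everything else here is illustrative.

def effective_spectrum_mhz(physical_mhz: float, capacity_gain: float) -> float:
    """Capacity gained by smarter link adaptation behaves like extra spectrum."""
    return physical_mhz * (1.0 + capacity_gain)

def subsystem_energy_kwh(baseline_kwh: float, saving: float) -> float:
    """Energy drawn by the optimized subsystem after the quoted saving."""
    return baseline_kwh * (1.0 - saving)

# 100 MHz with a 10% capacity gain behaves like ~110 MHz of spectrum.
print(round(effective_spectrum_mhz(100, 0.10), 1))  # prints 110.0
# A subsystem that used 1000 kWh needs ~670 kWh after a 33% saving.
print(round(subsystem_energy_kwh(1000, 0.33), 1))   # prints 670.0
```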

Shri Ritu Ranjan Mittar

So the next question is to you, Mr. Sukumar: in the telecom hardware and software space, which decisions should be pushed to the edge and which should be centralized?

Dr. Vinesh Sukumar

It's a great question, by the way. I think it's really going to come down to key performance indicators, I would suppose. If you're looking at areas where you want to focus more on data privacy, user privacy, better responsiveness, better data management, a predictable end-to-end performance, to a large extent that's going to happen on the edge. This could cover experiences on data-plane loops, L1 and L2, user and kernel space operations; anything to do with PII or user privacy would be edge-resident. And as you go more towards the cloud, I would say the emphasis is going to be a lot more around fleet management and anything to do with AI/ML training as such.

There's a concept which historically in ML land was called MLOps, covering everything from training data all the way to inferencing and monitoring. We now also look at areas where things are breaking because of drift, and how we go and fix that; that's where I think the cloud definitely helps. Now, at the same time, it's not a very binary equation where one thing is edge and one is cloud. There are also concepts of hybrid, where there's going to be coexistence. As I was mentioning before in my talk, you have to find ways for the edge to complement the cloud.

It could be elements of personalization that you want to drive at the edge, and when you go towards more complex scenarios, you position towards the cloud. Hybrid is at a very early stage. To a large extent, these days the routers are very static in nature, meaning the workloads and experiences are predefined: X, Y, Z runs on the edge and A, B, C runs on the cloud. The most difficult challenge has been how these routers can be intelligent enough to, say, position a multi-turn conversation towards the cloud at some point. I think that is a huge research topic, and I'm hoping in the next couple of months we'll see some interesting results.

Thank you.
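The static edge/cloud "router" described above can be caricatured in a few lines. This is a toy sketch under assumed request fields (`contains_pii`, `latency_budget_ms` and `complexity` are our own hypothetical names, not any Qualcomm API); real hybrid-AI routing weighs far more signals and, as the speaker notes, ideally re-routes mid-conversation.

```python
# Toy sketch of a static edge/cloud workload router, as described in the
# panel: privacy- and latency-sensitive work stays on-device, complex or
# multi-turn reasoning goes to the cloud. All field names are hypothetical.
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    contains_pii: bool        # personally identifiable information present?
    latency_budget_ms: int    # responsiveness requirement
    complexity: str           # "simple" | "multi_turn" | "complex"

def route(req: InferenceRequest) -> str:
    # PII or tight latency budgets keep the workload on the edge.
    if req.contains_pii or req.latency_budget_ms < 50:
        return "edge"
    # Complex or multi-turn reasoning is positioned towards the cloud.
    if req.complexity in ("multi_turn", "complex"):
        return "cloud"
    return "edge"

print(route(InferenceRequest(True, 500, "simple")))       # prints edge
print(route(InferenceRequest(False, 200, "multi_turn")))  # prints cloud
```

A smarter router would make this decision per turn rather than per request, which is the open research question the speaker points to.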

Shri Ritu Ranjan Mittar

So the next question is for you, Mr. Pasi. I'm sure a lot of development is already taking place with the telecom OEMs. Now, what kind of decisions do you expect will be taken off-net, and what decisions do you think you would be taking during the operation of the network while using AI?

Mr Pasi Toivanen

Wow. And we have only these four minutes? Okay, okay. But on a serious note, I think the jury is still out on this one. Again, going back to my earlier comment, it depends on how we holistically address the topic, how we go through the overall value and that ecosystem and the related functionality. I believe more will be done by the network itself. When we design it correctly, it's able to do many of the security vulnerability assessments by itself. Is it alerting upstream? Is it going to the edge or further? Let's see. But I think there needs to be a very intense dialogue between the network and the edge. The more decisions travel further to the regional data centers, the more we contribute to inefficiency and hence also to complexity.

So my planning assumption is that I would push as much of the optimization and automation as possible into the network itself, then the limited cases to the edge, and less and less to the actual regional data center, to put it short.

Shri Ritu Ranjan Mittar

Thank you for that. Shantaram, let me come to the ethical part of it. Let's say a base station serves an urban area and part of it also serves a rural area. How can we make sure that, with AI in the background, the customer in the rural area is not deprived of bandwidth with respect to the urban area? I'm sure you will be looking at ensuring that bandwidth is not constrained for an area or a set of subscribers, but what are your thoughts? How can we check these things in a network?

Mr. Shantigram Jagannath

Okay. So the easy answer is just to buy more equipment. But that's a great question. This question has been live for almost decades; we went through a phase where we had net neutrality and those debates happening, so it's not different from those types of debates. I think while we look at access to AI, there is access to the central AI, which has to go through backhaul capacity and so on. And obviously there one has to create different types of network slices for different types of use cases. With today's technology, at least the way we administer networks, it is quite possible to do that.

And it is possible to do that with single clicks. With us bringing in operational AI, it can do it even more efficiently, without you having to think too much: you can just say that you need to create this kind of bandwidth for this type of application, and the assisted network management can actually go and do that for you. Now couple this with having a lot of edge access for AI. I'll share an example: in the US they recently launched an application where the telecom network can actually sense your voice metrics and identify you through that.

So imagine that kind of application here in India, where your identity can actually be verified by the telecom operator not just by your number or by digital information, but by the analog information that you are actually communicating. These types of applications can actually be launched on the edge of the network. So, short answer: step one, try to have a lot of edge AI; and step two, use much more sophisticated network management capability to clearly separate out the different types of traffic.
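The slicing idea sketched above, giving each slice a guaranteed floor so rural users are never starved while leftover capacity is shared by demand, can be illustrated with a toy allocator. This is our own minimal sketch, not any vendor's or 3GPP's actual slicing mechanism, and the numbers are invented.

```python
# Toy slice-based bandwidth allocator: each slice gets a guaranteed floor,
# and any spare capacity is shared in proportion to unmet demand, so the
# rural slice keeps its minimum even when urban demand spikes.

def allocate(total_mbps: float, slices: dict[str, dict]) -> dict[str, float]:
    """slices: name -> {"floor": guaranteed Mbps, "demand": offered load}."""
    # Step 1: honour each slice's floor (capped by its actual demand).
    alloc = {name: min(s["floor"], s["demand"]) for name, s in slices.items()}
    spare = total_mbps - sum(alloc.values())
    # Step 2: split the spare capacity in proportion to unmet demand.
    unmet = {n: slices[n]["demand"] - alloc[n] for n in slices}
    total_unmet = sum(unmet.values())
    if total_unmet > 0:
        for n in slices:
            alloc[n] += min(unmet[n], spare * unmet[n] / total_unmet)
    return alloc

result = allocate(1000, {
    "urban": {"floor": 300, "demand": 900},
    "rural": {"floor": 200, "demand": 300},
})
print(result["rural"] >= 200)  # rural floor honoured even under urban load: prints True
```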

Shri Ritu Ranjan Mittar

Well, thank you so much. I do note that the time is up, but still we can have one or two quick questions. Yes, Mr.?

Audience

Good afternoon, respected panelists. It was a wonderful discussion, and I will try to keep myself short. As I was hearing all that: we have around 118 crore mobile connections in India, built up over a huge network of fiber, wireless, leased circuits and everything. Once we try to introduce AI into that, as we know the network is not AI-native; we have to build AI in the form of external apps and so on, and any single minute of disruption causes huge resentment and loss of time and resources. So how do we actually progress from these 118 crores, and if we want to feed through AI, what is the vision in front of us that we can really carry forward from here? I will look to any of the panelists. Thank you.

Mr Pasi Toivanen

Perhaps I can start. Thanks for it, it's a wonderful question, and sorry if I sound like a broken record, but without thinking about the network evolution end to end, you are not able to address it. So it comes back to this ecosystem of players, so that you are able to model what the change introduces to the network and optimize the network end to end. I think it is the only way. Otherwise, you are going to put patch fixes here and there based on certain application behaviors, and you will not be able to evolve the whole network.

Shri Ritu Ranjan Mittar

Would you also like to add to that?

Mr. Shantigram Jagannath

So I think essentially what you are saying is that it is a journey, and how do we chart out that journey? There are two or three thoughts on this. One is that today our telecom networks are mostly catering to human users; of course there are enterprises, etc. Three, four, five years down the road, I do expect AI users to start to dominate, so the business model and the regulation have to support that kind of evolution. I think that is step one. We need to think through how we handle this: today we have 118 crore mobile phones; five years from now we might actually have 500 crore AI agents doing various things, but still communicating either with each other or with central data repositories and so on.

And we need to basically figure out how you charge for it. What is the economics of this? How is it all going to work? Who is going to pay for all of that activity? So there is a lot of policy thought process that has to go in. On the physical side, we are building more and more, denser and denser fiber optics, carrying 8 terabits, 20 terabits and so on, in anticipation of something like this. So I don't know if that answered your question, but thank you so much.

Ms. Pallavi Mishra

Thank you. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (20)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The inaugural session of the India AI Impact Summit 2026 was organized on the sidelines of the main summit.”

The knowledge base notes that the event was organized on the sidelines of the India AI Impact Summit 2026, confirming the report’s statement. [S90]

Confirmed (high)

“TRAI Chairman Shri Anil Kumar Lahoti delivered remarks at the summit, representing TRAI.”

The closing-ceremony transcript identifies Anil Kumar Lahoti as the Chairman of the Telecom Regulatory Authority of India (TRAI) and records his speech at the summit, confirming his participation and role. [S8]

Additional Context (medium)

“AI‑driven predictive network management, fault detection and energy‑efficiency optimisation are already being deployed in India’s telecom sector.”

Other summit materials describe telecom operators using AI for network planning, performance optimisation, predictive analysis and resource management, providing broader context for the specific deployments mentioned. [S91] and [S92]

External Sources (99)
S1
Trusted Connections_ Ethical AI in Telecom & 6G Networks — – Dr. Vinesh Sukumar- Mr Pasi Toivanen Dr. Sukumar advocates for edge processing for privacy and responsiveness concern…
S2
Trusted Connections_ Ethical AI in Telecom & 6G Networks — -Shri Ritu Ranjan Mittar- Member TRAI, telecom policy expert with over three decades of experience in telecom networks, …
S3
Trusted Connections_ Ethical AI in Telecom & 6G Networks — – Mr. Shantigram Jagannath- Magnus Ewerbring
S4
Trusted Connections_ Ethical AI in Telecom & 6G Networks — – Magnus Ewerbring- Ms. Pallavi Mishra
S5
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — Last we saw was in G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had e…
S6
Trusted Connections_ Ethical AI in Telecom & 6G Networks — – Shri Anil Kumar Lahoti- Dr. Vinesh Sukumar
S7
Trusted Connections_ Ethical AI in Telecom & 6G Networks — -Shri Anil Kumar Lahoti- Honorable Chairman, Telecom Regulatory Authority of India (TRAI), telecom regulatory expert wit…
S8
Closing Ceremony — ### Anil Kumar Lahoti – Chairman, India’s Telecom Regulatory Authority – **Anil Kumar Lahoti**: Chairman of the Telecom…
S9
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — – **Anil Kumar Lahoti**: Role/Title: Not specified (India representative), Area of expertise: Cyber resilience and cross…
S10
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S11
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S12
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S13
Trusted Connections_ Ethical AI in Telecom & 6G Networks — -Magnus Ewerbring- Chief Technology Officer for Asia Pacific at Ericsson, global telecom innovator involved in developin…
S14
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S15
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — Anil Kumar Bhardwaj: I’ll make sure I limit myself to three minutes. Mr. Seizo Onoe, Director TSB, Mr. Niraj Verma, Admi…
S16
Designing Indias Digital Future AI at the Core 6G at the Edge — I am part of the 6G use case group, work very closely with Shokji and I think many things are already in place. We draft…
S17
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S18
WS #110 AI Innovation Responsible Development Ethical Imperatives — Dr Zhang Xiao: Thank you everyone. I’m glad to be involved in this interesting discussion and I have three points to sha…
S19
Ethics and AI | Part 6 — The EU Act categorizes AI systems into different risk levels—unacceptable, high-risk, and low-risk—each with correspondi…
S20
HIGH LEVEL LEADERS SESSION IV — There needs to be human oversight, transparency, and explainability This approach ensures that technology is used as a …
S21
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S22
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — I mean, if a mobile operator arbitrarily starts turning off SIM cards because they think maybe that traffic looks a bit …
S23
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S24
India launches AI-driven consumer protection initiatives — The Indian government haslaunchedseveral initiatives to strengthen consumer protection, focusing on leveraging technolog…
S25
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S26
Multistakeholder Partnerships for Thriving AI Ecosystems — For instance, the National Skilling Mission, the skilling mission that is undertaken by NASCOM, which is the IT industry…
S27
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S28
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — “So 6G is going to provide an evolution of connectivity, faster speed, lower latency, higher coverage.”[20]. “The bigges…
S29
WS #362 Incorporating Human Rights in AI Risk Management — – Some form of regulatory framework is needed to ensure widespread adoption of rights-respecting practices
S30
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a …
S31
State of play of major global AI Governance processes — Juha Heikkila: Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed the …
S32
Do we really need specialised AI regulation? — The Apex: AI use is where AI’s societal, legal, and ethical consequences come into sharp focus. Whether it’s deepfakes, b…
S33
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful con…
S34
The State of Digital Fragmentation (Digital Policy Alert) — Digital fragmentation can have implications at the technical level, thereby exacerbating potential risks. Policy fragmen…
S35
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — Challenges and Risk Mitigation Environmental sustainability must be integrated into ICT development Human rights | Dev…
S36
WS #103 Aligning strategies, protecting critical infrastructure — Intersectionality of technological landscape complicates policy approaches
S37
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S38
Omnipresent Smart Wireless: Deploying Future Networks at Scale — Bocar A. BA.:I think it’s a great opportunity, and everybody in the room worldwide can attest that we have lived the pan…
S39
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Africa is one of the most energy-constrained regions. It’s also a continent where adoption is becoming very frequent. W…
S40
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — In summary, Colombia’s comprehensive approach to energy transition is manifested through shifts in hydrocarbon explorati…
S41
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S42
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — One argument raised is the need to rethink the ideology and narrative of growth and development. There is a call to move…
S43
Open Forum #33 Building an International AI Cooperation Ecosystem — Risk-based regulatory approaches are needed but implementation remains challenging
S44
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — The analysis examines the importance of principles and regulation in the field of artificial intelligence (AI). It highl…
S45
Secure Finance Risk-Based AI Policy for the Banking Sector — The conversation framed AI governance not as a constraint on innovation but as an enabler of sustainable, trustworthy AI…
S46
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The panel discussion featured distinguished experts from major telecommunications equipment manufacturers, each offering…
S47
Report by the Commission on the Measurement of Economic Performance and Social Progress — The second figure more or less reflects the reference scenario of the Stern review. Even if it projected large negative …
S48
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S49
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S50
Data first in the AI era — The path forward requires sustained collaboration across sectors, attention to power imbalances and capacity building ne…
S51
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Wai Sit Si Thou from UN Trade and Development presented a framework focusing on infrastructure, data, and skills as key …
S52
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — One of the most striking revelations came from Yutong Zhang’s discussion of Moonshot AI’s resource efficiency in develop…
S53
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Artificial intelligence | Data governance He explains that remote, low‑connectivity scenarios benefit from edge deploym…
S54
Research Publication No. 2014-6 March 17, 2014 — – (1) Policy objectives : Our cases studies illustrate that the public sector can develop and implement cloud-relevant …
S55
Designing Indias Digital Future AI at the Core 6G at the Edge — Power consumption concerns are driving data centers toward edge deployment Simple inferencing workloads will be handled…
S56
Building Indias Digital and Industrial Future with AI — “Today’s mobile networks are becoming intelligent, programmable and trusted layers of the national infrastructure.”[1]. …
S57
Telecommunications infrastructure — Network operators increasingly rely on AI for a wide range of tasks, fromnetwork planning(e.g. using algorithms to ident…
S58
Beyond development: connectivity as human rights enabler | IGF 2023 Town Hall #61 — On the other hand, various barriers to connectivity are observed. Infrastructure limitations, such as the backhaul and m…
S59
Regional Leaders Discuss AI-Ready Digital Infrastructure — These key comments collectively transformed what could have been a typical technology-focused discussion into a more nua…
S60
High-level AI Standards panel — The standards ecosystem should be expanded to include connections with broader AI governance discussions and internation…
S61
Building Scalable AI Through Global South Partnerships — These key comments fundamentally shaped the discussion by moving it beyond superficial celebrations of AI achievements t…
S62
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — Amon highlights India’s unique positioning to benefit from this AI transformation, noting the country’s successful mobil…
S63
India’s AI market set to surge to over $130 billion by 2032 — The AI market in India has expanded from roughly $2.97 billion in 2020 to $7.63 billion in 2024, and is projected to reach…
S64
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Major announcements from Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S65
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh: Thank you, thank you Inma. I must straightaway mention that one key value that we get as being part of th…
S66
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — The equipment was different. The use case is different. We’re heading to the next big transformation of the telecom sect…
S67
Designing Indias Digital Future AI at the Core 6G at the Edge — Ashok Kumar from the Department of Telecom established the foundational premise that 6G represents a fundamental departu…
S68
Trusted Connections_ Ethical AI in Telecom &amp; 6G Networks — And let’s do it. India can show the direction forward. For whole world. There is a tradition for great. collaboration, g…
S69
Building Indias Digital and Industrial Future with AI — Thank you, Julian. Thanks for the opening remarks. Am I audible? Looks like yes. So let’s begin. We have a fantastic pan…
S70
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — And the reason for it is in the scam economy, regulation cannot move as fast as scammers. Scammers are not bound by geog…
S71
From KW to GW Scaling the Infrastructure of the Global AI Economy — The success of this transformation will depend on continued collaboration between global technology providers and local …
S72
Global telecommunication and AI standards development for all — The country has actively contributed to defining telecommunications standards, as evidenced by its development of the 5G…
S73
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Lucia Russo: Maybe I’ll go first. Yes, you’re totally right. We are seeing many policies and regulations emerging. A…
S74
WS #362 Incorporating Human Rights in AI Risk Management — – Some form of regulatory framework is needed to ensure widespread adoption of rights-respecting practices
S75
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: First, I have to unmute. So thank you so much, Alex. I always enjoy your line of reasoning. Let me suggest a …
S76
AI and ethics in modern society — Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of …
S77
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful con…
S78
Broadband from Space! Can it close the Digital Divide? | IGF 2023 WS #468 — Cooperation and interoperability among space-based providers are key factors for success. Despite concerns about the env…
S79
UNSC meeting: Scientific developments, peace and security — Algeria: I thank Switzerland for the excellent choice of the theme of our briefing today. And we listen carefully to the …
S80
Scramble for Internet: you snooze, you lose | IGF 2023 WS #496 — In conclusion, the internet’s original purpose was to connect the scientific and academic community, but it quickly evol…
S81
Main Session on GDC: A multistakeholder perspective | IGF 2023 — The internet’s growth is expected to continue, but challenges with capacity, infrastructure, integrity, and security mus…
S82
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S83
Building Inclusive Societies with AI — Ecosystem collaboration is essential to solve informal sector problems Industry partnerships essential for curriculum d…
S84
Open Internet Inclusive AI Unlocking Innovation for All — But even with today’s SOTA model, you can get to about maybe a rupee. Right now, the question is, even at a rupee, now y…
S85
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — Cina Lawson: Thank you very much, so the first comment I make is that AI has to work for us. It means that we have to ma…
S86
Keynote Adresses at India AI Impact Summit 2026 — -Ashwini Vaishnav- Minister (India) -Participant- Event moderator/host And so we are here to listen to our distinguish…
S87
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Honourable Prime Minister Modi, Excellencies, dear colleagues, ladies and gentlemen. It is a great honour for me to be i…
S88
Powering AI Global Leaders Session AI Impact Summit India — -Sam Altman: CEO and co-founder of OpenAI (mentioned but did not speak in this transcript) -Speaker: Role/title not spe…
S89
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — Audience: Thank you. Very insightful session. Thank you very much, Ms. Moser, for that very insightful aspect of use…
S90
https://dig.watch/event/india-ai-impact-summit-2026/trusted-connections_-ethical-ai-in-telecom-6g-networks — Thank you. being orga…
S91
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Coriz provided concrete examples of how the telecommunications sector is already implementing AI solutions. These applic…
S92
WSIS Action Line C2 Information and communication infrastructure — AI technologies can help lower the costs of operating networks while improving their efficiency in both urban and rural …
S93
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S94
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Francesca Rossi:Thank you very much for this invitation and for the opportunity to participate in this panel. So many of…
S95
Digital Trust 2025 — The second session will examine how different sectors build trust over time, based on their practices and government pol…
S96
Building Trust through Transparency — Another perspective shifts the focus from trust to trustworthiness. The speaker contends that trustworthiness should be …
S97
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — He introduces a panel of experts from different fields
S98
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — The moderator opens, transitions, and closes the session, guaranteeing that speakers are introduced, the discussion proc…
S99
Using AI to tackle our planet’s most urgent problems — Amazon’s Chief Technology Officer Werner Vogels delivered a presentation on leveraging artificial intelligence to addres…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Shri Anil Kumar Lahoti
11 arguments · 93 words per minute · 1023 words · 654 seconds
Argument 1
AI enables predictive network management, fault detection, energy savings, and spam filtering (Shri Anil Kumar Lahoti)
EXPLANATION
The chairman highlighted that AI is already being used to anticipate network faults, optimise performance, reduce energy consumption and filter spam, demonstrating concrete operational benefits for telecom operators. These applications show how AI can improve service quality, resilience and consumer safety when deployed responsibly.
EVIDENCE
He noted that AI is deployed to optimise network performance, predict faults, improve energy efficiency, enhance customer experience and combat fraud and spam communications, with operators reporting significant energy savings and the ability to flag or block nearly 400 million suspected spam calls or messages each day, and the disconnection of about 2.1 million spam numbers [38-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Network operators increasingly rely on AI for planning, performance optimisation, fault detection and energy efficiency improvements, confirming the operational benefits described [S14].
MAJOR DISCUSSION POINT
Predictive management & efficiency
AGREED WITH
Magnus Ewerbring, Mr Pasi Toivanen
Argument 2
AI will become intrinsic to 6G, making networks AI‑native (Shri Anil Kumar Lahoti)
EXPLANATION
The chairman explained that in the upcoming 6G era AI will no longer be an add‑on application layer but will be embedded within the network architecture itself, resulting in AI‑native telecom networks. This shift will transform networks into the primary carriers of AI intelligence.
EVIDENCE
He stated that in 6G AI will no longer be an application layer, it will be intrinsic, and telecom networks will be AI native, turning them into a central pillar of India’s AI infrastructure [31-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-core and edge integration for 6G is highlighted in TRAI’s white-paper work and industry studies, emphasizing AI-native network architectures [S16][S17][S1].
MAJOR DISCUSSION POINT
AI‑native 6G networks
AGREED WITH
Magnus Ewerbring, Mr Shantigram Jagannath
DISAGREED WITH
Mr Shantigram Jagannath, Mr Magnus Ewerbring
Argument 3
TRAI’s risk‑based regulatory framework and sandbox for AI trials (Shri Anil Kumar Lahoti)
EXPLANATION
TRAI introduced a risk‑based approach that differentiates low‑risk AI use cases, which can be self‑regulated, from high‑risk applications that require stronger obligations such as transparency and human oversight. A regulatory sandbox was also created to allow live testing of AI solutions under defined safeguards.
EVIDENCE
He referenced the July 2023 recommendations that proposed a risk-based regulatory framework for AI in telecom and the April 2024 recommendations that facilitated a regulatory sandbox for live network testing of AI-enabled solutions, including those for 5G and future 6G networks [49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act’s risk-based classification provides a comparable regulatory model, supporting TRAI’s risk-based approach and sandbox concept for live AI testing [S19][S1].
MAJOR DISCUSSION POINT
Regulatory sandbox & risk‑based framework
AGREED WITH
Mr Pasi Toivanen, Ms. Pallavi Mishra
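The tiered logic described in this argument can be sketched as a simple lookup; note that the tier names and obligation lists below are illustrative assumptions for the sketch, not TRAI's actual taxonomy.

```python
# Hypothetical sketch of a risk-tiered obligation mapping: low-risk use cases
# fall under self-regulation, high-risk ones attract stronger obligations.
OBLIGATIONS = {
    "low": ["self-regulation"],
    "high": ["transparency", "explainability", "human oversight"],
}

def obligations_for(risk_tier: str) -> list:
    """Return the governance obligations attached to a risk tier."""
    try:
        return OBLIGATIONS[risk_tier]
    except KeyError:
        raise ValueError("unknown risk tier: %r" % risk_tier)
```

In practice a regulator's classification would be far richer (use-case context, affected consumers, sectoral rules); the lookup only captures the two-tier split the chairman described.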
Argument 4
Human‑centric “MANOV” vision emphasizing ethics, accountability, and safeguards (Shri Anil Kumar Lahoti)
EXPLANATION
The MANOV vision, announced by the Prime Minister, calls for a human‑centric AI governance framework that embeds ethical safeguards, accountability and inclusivity by design, guiding AI deployment across sectors including telecom.
EVIDENCE
He mentioned that the MANOV vision emphasizes a human-centric framework for ethical, accountable, inclusive AI governance and that its principles are fundamental to AI governance in telecommunications [54-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
TRAI’s ethical AI discussions and global responsible-AI imperatives stress human-centric governance, ethics and accountability, aligning with the MANOV vision [S1][S18][S20].
MAJOR DISCUSSION POINT
Human‑centric AI governance
Argument 5
Transparency, explainability, and human oversight required for high‑risk AI (Shri Anil Kumar Lahoti)
EXPLANATION
For AI applications that directly affect consumers, the chairman stressed the need for higher regulatory obligations, including clear transparency, explainability of decisions and human oversight, to protect consumer rights and maintain trust.
EVIDENCE
He explained that high-risk use cases require stronger obligations around transparency, explainability, and human oversight, whereas low-risk applications may be guided through self-regulation [50-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
International guidelines for high-risk AI call for transparency, explainability and human oversight, mirroring the obligations outlined by TRAI [S20][S21].
MAJOR DISCUSSION POINT
High‑risk AI governance
Argument 6
Trust and consumer protection are central to AI adoption in essential telecom services (Shri Anil Kumar Lahoti)
EXPLANATION
The chairman underscored that because telecom is an essential public service, AI‑driven efficiency gains must not compromise transparency, accountability or consumer rights; trust must remain the cornerstone of AI transformation.
EVIDENCE
He highlighted that trust is the central pillar of AI adoption in telecom, emphasizing that efficiency gains cannot come at the cost of transparency, accountability or consumer rights, and that public confidence must remain at the core of AI-enabled transformation [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI Automation sessions stress accountability and public trust, while India’s AI-driven consumer-protection initiatives demonstrate the focus on safeguarding users [S22][S24].
MAJOR DISCUSSION POINT
Trust & consumer protection
Argument 7
India’s extensive fiber backbone and mobile broadband networks provide a strong foundation for AI deployment in telecom.
EXPLANATION
The chairman highlighted that the country’s nationwide fiber and mobile infrastructure is among the most widely distributed digital assets globally, creating an enabling environment for AI‑driven services.
EVIDENCE
He noted that “Our nationwide fiber backbones and mobile broadband networks constitute one of the most widely distributed digital infrastructures in the world, operating within mature operational and regulatory frameworks” [35-36].
MAJOR DISCUSSION POINT
Infrastructure readiness for AI
Argument 8
A digital consent acquisition framework is being rolled out to give consumers control over commercial communications.
EXPLANATION
TRAI is advancing a mechanism that allows users to grant or withdraw consent digitally, ensuring that AI‑enabled marketing respects privacy and user choice.
EVIDENCE
He mentioned “The authority is also advancing the rollout of a digital consent acquisition framework following successful pilot runs with the banks to ensure consumers have digital control over consent for commercial communications” [44].
MAJOR DISCUSSION POINT
Consumer consent and privacy
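A consent-acquisition mechanism of the kind described could be modelled as a record that a subscriber can grant or revoke digitally; the field names and methods below are invented for illustration and are not TRAI's actual schema.

```python
# Minimal sketch of a digital consent record for commercial communications.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subscriber_id: str
    sender: str            # entity seeking consent to send commercial messages
    granted: bool = False  # no consent until the subscriber explicitly grants it

    def grant(self) -> None:
        self.granted = True

    def revoke(self) -> None:
        self.granted = False
```

The key property the framework aims at is that revocation is as easy as granting, keeping control with the consumer.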
Argument 9
AI‑driven telecom operations across borders raise interoperability and standards challenges that require international cooperation.
EXPLANATION
As AI scales globally, the need for common standards, interoperable protocols, and ethical alignment becomes critical, prompting collaboration beyond national boundaries.
EVIDENCE
He stated “As AI-driven telecom operations scale across borders, issues of interoperability, standards, and ethical alignment become global concerns” [59-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for trusted, interoperable AI systems and global cooperation to avoid fragmentation are documented in discussions on international AI ecosystems [S23][S25].
MAJOR DISCUSSION POINT
Cross‑border AI interoperability
Argument 10
TRAI is committed to multi‑stakeholder collaboration to ensure AI serves both innovation and the public good.
EXPLANATION
The regulator emphasizes working with industry, policymakers, and international partners to balance innovation with public interest safeguards.
EVIDENCE
He concluded “TRAI remains committed to working with all stakeholders, industry, policymakers and international partners to ensure that AI in telecom serves both innovation and public good” [64-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder partnerships and collaborative strategies are highlighted as essential for thriving AI ecosystems and balancing innovation with public interest [S26][S9].
MAJOR DISCUSSION POINT
Collaborative governance
Argument 11
AI‑driven automation is indispensable for managing India’s massive subscriber base and network scale
EXPLANATION
The chairman stresses that with over 1.3 billion telecom subscribers, AI is no longer optional but essential to handle the scale of operations, optimise performance and maintain service quality.
EVIDENCE
He notes that with over 1.3 billion telecom subscribers and over 1 billion data users, AI-driven automation is no longer optional and is indispensable for network management [37-38].
MAJOR DISCUSSION POINT
Necessity of AI at scale
M
Magnus Ewerbring
6 arguments · 125 words per minute · 764 words · 365 seconds
Argument 1
AI improves link‑adaptation capacity by 10% and energy efficiency by 33% (Magnus Ewerbring)
EXPLANATION
Magnus reported that AI‑driven optimisation of link‑adaptation algorithms increased effective spectrum capacity by 10%, while AI analysis of processing components delivered a 33% improvement in energy efficiency, showcasing tangible performance benefits.
EVIDENCE
He gave specific figures: optimisation of link adaptation raised capacity from 100 MHz to an equivalent of 110 MHz (a 10% gain) and AI-based analysis of a system component improved energy efficiency by 33% [206-214].
MAJOR DISCUSSION POINT
Performance and energy gains
AGREED WITH
Shri Anil Kumar Lahoti, Mr Pasi Toivanen
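The quoted gains can be checked back-of-the-envelope; the input figures come from the talk, while the arithmetic and the reading of the energy number are our own.

```python
# Link-adaptation: 100 MHz of spectrum behaving like 110 MHz after AI tuning.
baseline_mhz = 100.0
optimised_mhz = 110.0
capacity_gain = optimised_mhz / baseline_mhz - 1.0   # 0.10, i.e. the 10% gain

# One reading of a "33% energy-efficiency improvement": the same work done
# with 1/1.33 of the previous energy, i.e. roughly a 25% energy reduction.
energy_ratio = 1.0 / 1.33
```

Efficiency percentages are ambiguous (more work per joule vs. fewer joules per unit of work), which is why the second figure is hedged as one possible interpretation.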
Argument 2
AI‑driven optimization yields 10% capacity gain and 33% energy savings (Magnus Ewerbring)
EXPLANATION
Reiterating earlier points, Magnus emphasized that AI can deliver a 10% increase in network capacity and a 33% reduction in energy consumption, reinforcing the business case for AI‑enabled network optimisation.
EVIDENCE
He restated the same quantitative improvements: 10% capacity increase through link-adaptation optimisation and 33% energy efficiency gain from AI analysis [206-214].
MAJOR DISCUSSION POINT
Optimization benefits
Argument 3
India’s 90‑99% 5G population coverage positions the country to leverage AI for large‑scale innovation.
EXPLANATION
The speaker pointed out that near‑universal 5G coverage creates a strategic advantage for deploying AI‑enabled services and applications at scale.
EVIDENCE
He said “India comes out being very much in the pole position having a well over 90 % population coverage… up to 99 % population coverage for 5G today” [101-103].
MAJOR DISCUSSION POINT
Strategic advantage of extensive 5G coverage
Argument 4
The industry aims to reach TM Forum Level 4 by 2028 and Level 5 for AI‑native 6G networks.
EXPLANATION
Magnus outlined a roadmap where operators target TM Forum maturity Level 4 by 2028, then move to Level 5 with fully autonomous, AI‑native 6G networks.
EVIDENCE
He explained “The goal … is to reach what we call level 4 in TM Forum. By 2028 many mobile operators aspire to be there… the next step … level 5 and be fully autonomous… 6G shall be AI native” [111-112].
MAJOR DISCUSSION POINT
Roadmap to AI‑native network maturity
Argument 5
Current AI use in networks is only the beginning, with further performance gains expected.
EXPLANATION
He emphasized that while AI already powers many network functions, substantial future improvements are anticipated as AI integration deepens.
EVIDENCE
He remarked “The networks already today use AI… I argue it’s only the beginning” [109-110].
MAJOR DISCUSSION POINT
Future potential of AI in telecom
Argument 6
Leveraging AI on India’s digital stack can boost local efficiency and create export opportunities
EXPLANATION
Ewerbring argues that integrating AI into India’s existing digital infrastructure will not only improve operational efficiency domestically but also position Indian firms to export AI‑enabled services globally.
EVIDENCE
He mentions that India is building its digital stack impressively, leveraging AI to drive local efficiency and also creating export possibilities for AI-enabled services [113-117].
MAJOR DISCUSSION POINT
Economic benefits of AI integration
D
Dr. Vinesh Sukumar
5 arguments · 202 words per minute · 882 words · 261 seconds
Argument 1
Democratizing AI on devices through hybrid edge‑cloud models (Dr. Vinesh Sukumar)
EXPLANATION
Dr. Sukumar described Qualcomm’s effort to bring AI inference to a wide range of edge devices—phones, laptops, wearables—while coordinating with cloud resources through a hybrid model, aiming to make AI accessible and personalized at the device level.
EVIDENCE
She explained that Qualcomm is working to democratise AI by enabling inference on personal devices such as phones, laptops, smart watches and glasses, noting the challenges of edge inference and the importance of hybridisation between cloud and edge for personalized services [119-128].
MAJOR DISCUSSION POINT
Edge‑cloud hybridisation
Argument 2
Edge AI for privacy, low latency; cloud for fleet management and model training (Dr. Vinesh Sukumar)
EXPLANATION
She argued that privacy‑sensitive and latency‑critical tasks should run on the edge, whereas cloud platforms are better suited for fleet management, AI/ML model training and broader analytics, highlighting a complementary division of labour.
EVIDENCE
She stated that edge AI handles privacy, low-latency needs and user-centric data, while cloud handles fleet management, AI/ML training, and MLOps, emphasizing a hybrid approach rather than a binary split [219-224].
MAJOR DISCUSSION POINT
Edge vs. cloud responsibilities
AGREED WITH
Mr Pasi Toivanen, Shri Ritu Ranjan Mittar
DISAGREED WITH
Mr Pasi Toivanen, Mr Magnus Ewerbring
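The division of labour she describes can be sketched as a placement rule; the criteria and the 50 ms latency threshold are assumptions for this sketch, not Qualcomm's actual policy.

```python
# Illustrative edge/cloud workload placement following the split described:
# privacy-sensitive or latency-critical inference runs on the edge, while
# training, fleet management and analytics stay in the cloud.
def place_workload(privacy_sensitive: bool, latency_budget_ms: float,
                   is_training: bool = False) -> str:
    if is_training:
        return "cloud"            # model training and MLOps belong in the cloud
    if privacy_sensitive or latency_budget_ms < 50:
        return "edge"             # keep user data local, meet tight latency
    return "cloud"
```

A real hybrid system would weigh many more factors (device battery, model size, connectivity), which is the open research challenge she points to.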
Argument 3
Edge AI currently lacks common‑sense, requiring significant investment to become user‑centric.
EXPLANATION
She noted that AI models on devices often miss contextual understanding, and bridging this gap demands considerable research and financial resources.
EVIDENCE
She said “AI historically lacks common sense… needs a lot of investment” [122-124].
MAJOR DISCUSSION POINT
Limitations of current edge AI
Argument 4
Determining which workloads belong on the edge versus the cloud is a complex research challenge that demands hybrid strategies.
EXPLANATION
The speaker highlighted the difficulty of deciding workload placement and called for ongoing research into hybrid architectures that balance latency, privacy, and scalability.
EVIDENCE
She described “Hybridization… challenge to understand, which of these experiences would be transitioned towards the cloud… research activity… expect a strong transition where there’s a fundamental element of hybrid” [128-132].
MAJOR DISCUSSION POINT
Hybrid edge‑cloud workload allocation
Argument 5
Qualcomm aims to democratise AI across a broad spectrum of consumer devices, requiring extensive cross‑device integration
EXPLANATION
Sukumar describes Qualcomm’s effort to bring AI inference to phones, laptops, wearables and glasses, highlighting the challenge of making AI work seamlessly across diverse hardware platforms.
EVIDENCE
She explains that Qualcomm is trying to democratise AI by enabling inference on personal devices such as phones, laptops, smart watches and smart glasses, noting the difficulty of achieving this across many device types [119-124].
MAJOR DISCUSSION POINT
Cross‑device AI deployment
M
Mr Pasi Toivanen
7 arguments · 118 words per minute · 687 words · 348 seconds
Argument 1
Ecosystem‑wide safeguards and collaborative governance are essential (Mr Pasi Toivanen)
EXPLANATION
Pasi stressed that capturing AI value requires a 360° ecosystem involving OEMs, regulators, startups and other stakeholders, and that collaborative governance is needed to address security risks and ensure responsible AI deployment.
EVIDENCE
He spoke about the importance of a 360° ecosystem of technology players, regulators and government agencies to capture AI value, highlighted security risks, and called for proactive, transparent agreement on value distribution and safeguards [135-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder partnerships and collaborative governance frameworks are emphasized as key to capturing AI value while managing security risks [S26][S9].
MAJOR DISCUSSION POINT
Collaborative ecosystem governance
AGREED WITH
Shri Anil Kumar Lahoti, Ms. Pallavi Mishra
Argument 2
Push most optimization decisions to the network itself, reducing reliance on regional data centers (Mr Pasi Toivanen)
EXPLANATION
Pasi suggested that the majority of AI‑driven optimisation should be performed directly within the telecom network, limiting the need to send decisions to edge devices or regional data centres, thereby simplifying architecture and improving efficiency.
EVIDENCE
He indicated that most optimisation and automation decisions should be pushed to the network itself, with limited cases to the edge and even fewer to regional data centres, emphasizing intense dialogue between network and edge [239-248].
MAJOR DISCUSSION POINT
Network‑centric optimisation
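The placement hierarchy he describes can be sketched as a default-to-network rule with narrower fallbacks; the category names and branching criteria are illustrative assumptions.

```python
# Prefer in-network decisions, a limited set at the edge, and fewer still at
# regional data centres, per the hierarchy described above.
def decision_site(needs_device_context: bool, needs_heavy_compute: bool) -> str:
    if needs_heavy_compute:
        return "regional data centre"   # the rarest case
    if needs_device_context:
        return "edge"                   # the limited cases
    return "network"                    # default for optimisation/automation
```

The ordering encodes his point: the network itself is the default decision site, and escalation outward is the exception rather than the rule.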
Argument 3
Capturing AI value requires a 360° ecosystem of OEMs, regulators, and startups (Mr Pasi Toivanen)
EXPLANATION
Reiterating his earlier point, Pasi emphasized that no single entity can fully capture AI benefits; a coordinated ecosystem of OEMs, regulators and startups is essential for responsible AI deployment.
EVIDENCE
He again highlighted that capturing AI value needs a 360° ecosystem of OEMs, regulators, government agencies and startups, stressing collaborative governance and security risk mitigation [135-158].
MAJOR DISCUSSION POINT
360° ecosystem necessity
Argument 4
Incremental journey and ecosystem coordination are key to safely evolve legacy networks (Mr Pasi Toivanen)
EXPLANATION
Pasi argued that evolving legacy networks safely requires an end‑to‑end ecosystem approach rather than piecemeal patch fixes, advocating for holistic coordination among all stakeholders.
EVIDENCE
He responded to the audience question by stating that without an end-to-end ecosystem approach, only patch fixes would be possible, and that a holistic ecosystem is the only way to evolve the whole network [272-275].
MAJOR DISCUSSION POINT
Incremental ecosystem evolution
Argument 5
AI can autonomously perform security vulnerability assessments within the network.
EXPLANATION
When designed appropriately, AI‑enabled network elements can detect and flag security weaknesses without human intervention.
EVIDENCE
He stated “When we design it correctly, it’s able to do many of the security vulnerability assessments by itself” [244-245].
MAJOR DISCUSSION POINT
AI‑driven security assessment
Argument 6
Effective AI deployment requires an intense dialogue between the network core and edge components.
EXPLANATION
Coordinated communication between centralized network functions and edge devices is essential to ensure AI decisions are correctly propagated and acted upon.
EVIDENCE
He noted “… it needs to be very intense dialogue between the network and edge” [246-247].
MAJOR DISCUSSION POINT
Network‑edge coordination
Mr Shantigram Jagannath
10 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Decision on AI‑first vs. bolt‑on architecture impacts CAPEX/OPEX and long‑term sustainability (Mr Shantigram Jagannath)
EXPLANATION
Jagannath outlined the strategic choice between building an AI‑first, AI‑native architecture versus adding AI as a bolt‑on to existing equipment, noting that this decision influences capital and operational expenditures as well as future sustainability.
EVIDENCE
He described the two architectural choices: a completely AI-first, AI-native architecture or a bolt-on capability that leverages existing field equipment, emphasizing the impact on CAPEX and OPEX [180-183].
MAJOR DISCUSSION POINT
AI‑first vs bolt‑on architecture
AGREED WITH
Shri Anil Kumar Lahoti, Magnus Ewerbring
DISAGREED WITH
Mr Magnus Ewerbring, Shri Anil Kumar Lahoti
Argument 2
AI can generate new revenue streams via a telecom‑platform “app‑store” for AI models (Mr Shantigram Jagannath)
EXPLANATION
Jagannath proposed that telecom networks could become platforms where AI models are uploaded and accessed like an app store, creating new monetisation opportunities for operators and ecosystem partners.
EVIDENCE
He explained that the telecom network could act as a platform where simple AI models are uploaded and made accessible to all users, akin to an app-store model, generating revenue while ensuring trust and safety [188-192].
MAJOR DISCUSSION POINT
AI app‑store revenue model
Argument 3
Balancing cost optimization with revenue opportunities; choosing AI‑first or bolt‑on paths (Mr Shantigram Jagannath)
EXPLANATION
He discussed the need to weigh cost‑saving measures (CAPEX/OPEX optimisation) against potential new revenue from AI services, suggesting that the choice of architecture (AI‑first vs bolt‑on) should reflect both efficiency and business opportunities.
EVIDENCE
He noted that operators consider cost optimisation (CAPEX/OPEX) and revenue generation, referencing both the cost-optimization discussion and the AI-first versus bolt-on architectural choices [173-176][180-183].
MAJOR DISCUSSION POINT
Cost vs revenue trade‑off
Argument 4
Ensure rural users receive fair bandwidth through AI‑enabled network slicing (Mr Shantigram Jagannath)
EXPLANATION
Jagannath argued that AI can dynamically create network slices to allocate bandwidth equitably, preventing rural users from being disadvantaged compared to urban subscribers.
EVIDENCE
He described using AI-driven network slicing to allocate bandwidth for different applications, noting that such slicing can be performed with a single click and more efficiently with operational AI, thereby ensuring fair access for rural areas [260-266].
MAJOR DISCUSSION POINT
Equitable bandwidth allocation
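The slicing approach described above can be illustrated with a minimal allocation sketch. This is a hypothetical illustration only, assuming a weighted-fair policy with guaranteed minimums; the slice names, capacities and weights are assumptions, not figures from the session.

```python
# Minimal sketch of AI-assisted network slicing for equitable bandwidth
# allocation. Each slice gets a guaranteed floor (e.g. for rural access),
# then remaining capacity is split in proportion to priority weights.

def allocate_slices(total_bandwidth_mbps, demands):
    """Return a bandwidth allocation per slice, honouring minimum floors."""
    # Reserve the guaranteed minimums first.
    allocation = {name: d["min_mbps"] for name, d in demands.items()}
    remaining = total_bandwidth_mbps - sum(allocation.values())
    total_weight = sum(d["weight"] for d in demands.values())
    # Distribute the remainder in proportion to each slice's weight.
    for name, d in demands.items():
        allocation[name] += remaining * d["weight"] / total_weight
    return allocation

# Illustrative slice profiles (names and numbers are assumptions).
demands = {
    "urban_broadband": {"min_mbps": 200, "weight": 3},
    "rural_access":    {"min_mbps": 300, "weight": 2},  # guaranteed floor
    "iot_telemetry":   {"min_mbps": 50,  "weight": 1},
}
alloc = allocate_slices(1000, demands)
```

In practice the weights and floors would be set and retuned by the operational AI rather than hand-coded, which is what enables the "single-click" re-slicing the speaker describes.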
Argument 5
AI must serve the bottom‑of‑pyramid with low‑cost access and inclusive services (Mr Shantigram Jagannath)
EXPLANATION
He emphasized the responsibility of Indian telecom to provide affordable, inclusive AI‑enabled services to the poorest segments of society, aligning with the bottom‑of‑the‑pyramid concept.
EVIDENCE
He referenced the need to solve problems for the bottom of the pyramid, ensuring low-cost access for everyone in India, and highlighted telecom’s additional responsibility to provide affordable services to all [168-171].
MAJOR DISCUSSION POINT
Inclusion for underserved populations
AGREED WITH
Ms. Pallavi Mishra, Shri Ritu Ranjan Mittar
Argument 6
Future AI agents will vastly increase traffic; policies must evolve to handle new economics and usage patterns (Mr Shantigram Jagannath)
EXPLANATION
Jagannath projected that AI agents could outnumber human devices, creating massive traffic and new business models, and called for policy and regulatory frameworks to address the economics, charging mechanisms and sustainability of such usage.
EVIDENCE
He projected that while today there are 118 crore mobile phones, in five years there could be 500 crore AI agents, raising questions about charging, economics, and who will pay for the activity, and noted the need for policy thought and dense fiber infrastructure to support this future [281-291].
MAJOR DISCUSSION POINT
Future AI traffic & policy
Argument 7
AI enables automated network slicing with single‑click operations to ensure equitable bandwidth allocation.
EXPLANATION
He explained that AI‑driven orchestration can create and adjust network slices instantly, allowing operators to balance capacity between urban and rural users without manual intervention.
EVIDENCE
He said “it is quite possible to do that with single clicks… AI can do it more efficiently… you can create this kind of bandwidth for this type of application and the network management… can actually go and do that for you” [262-265].
MAJOR DISCUSSION POINT
Automated network slicing for fairness
Argument 8
AI‑driven voice‑based identity verification can enhance security and user authentication in telecom networks.
EXPLANATION
He cited a US example where voice metrics are used for identity verification and suggested similar AI‑enabled biometric solutions could be deployed in India.
EVIDENCE
He noted “In the US they recently launched an application where the telecom network can actually sense your voice metrics and identify you… could be launched here in India” [268-269].
MAJOR DISCUSSION POINT
AI for biometric authentication
Argument 9
An AI‑powered “app‑store” model can be used to monitor and enforce regulatory compliance across network services.
EXPLANATION
By allowing simple AI models to be uploaded and distributed via the telecom platform, operators can ensure that applications adhere to safety, trust, and regulatory standards.
EVIDENCE
He described “the telecom network essentially becomes a platform where simple models… can be uploaded and made accessible… with trust, regulation, safety” [188-192].
MAJOR DISCUSSION POINT
Regulatory compliance via AI marketplace
Argument 10
Expanding dense fibre infrastructure is essential to accommodate the future surge in AI‑generated traffic
EXPLANATION
Jagannath points out that as AI agents proliferate, the resulting traffic will require robust fibre networks, and therefore continued investment in high‑capacity fibre optics is critical.
EVIDENCE
He notes that India is building more and denser fibre optics capable of carrying 8 terabits to 20 terabits, anticipating the massive traffic that future AI agents will generate [291-293].
MAJOR DISCUSSION POINT
Infrastructure readiness for AI traffic
Shri Ritu Ranjan Mittar
4 arguments · 119 words per minute · 743 words · 374 seconds
Argument 1
Moderator highlights need to address AI impact on access networks, security threats, and sustainability (Shri Ritu Ranjan Mittar)
EXPLANATION
As moderator, Mittar raised key operational concerns: how AI will affect access networks, handset integration, potential security attacks, the compute‑intensive nature of AI, and the importance of sustainability in AI deployments.
EVIDENCE
He asked about AI’s evolution in access networks, challenges from AI on handsets, security threats from AI-based attacks, the compute-intensive nature of AI and its sustainability implications, urging experts to address these points [88-97].
MAJOR DISCUSSION POINT
Operational challenges & sustainability
AGREED WITH
Dr. Vinesh Sukumar, Mr Pasi Toivanen
Argument 2
The compute‑intensive nature of AI raises sustainability concerns for telecom operators.
EXPLANATION
He warned that AI’s high processing demands could increase energy consumption, making sustainability a key consideration in AI‑enabled network designs.
EVIDENCE
He observed “Another one thing with the AI is it is compute-intensive. So the sustainability as also is listed is going to be important” [94-96].
MAJOR DISCUSSION POINT
Sustainability of AI workloads
Argument 3
AI could be weaponised to launch attacks on telecom networks, necessitating proactive security safeguards.
EXPLANATION
The moderator raised the risk that malicious actors might exploit AI to compromise network integrity, calling for pre‑emptive defensive measures.
EVIDENCE
He asked “Are we going to be challenged by the AI being used for attacking the networks? And what kind of steps we intend to take?” [92-93].
MAJOR DISCUSSION POINT
AI‑driven cyber threats
Argument 4
AI embedded in handsets could introduce new security threats to telecom networks
EXPLANATION
Mittar raises the concern that as AI capabilities move onto user devices, they may become vectors for attacks on the network, requiring proactive defensive measures.
EVIDENCE
He asks whether AI on handsets will pose challenges to the network and whether AI could be used for attacking networks, prompting a discussion on required safeguards [89-93].
MAJOR DISCUSSION POINT
AI‑driven cyber threats from devices
Audience
1 argument · 134 words per minute · 149 words · 66 seconds
Argument 1
Scaling AI across 118 crore connections requires an end‑to‑end ecosystem approach to avoid disruption (Audience)
EXPLANATION
An audience member questioned how AI can be rolled out across India’s massive telecom base without causing service interruptions, emphasizing the need for a coordinated, end‑to‑end ecosystem strategy.
EVIDENCE
The audience asked how to progress AI deployment across 118 crore connections without disruption, stressing the need for an end-to-end ecosystem approach [271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
End-to-end ecosystem coordination is recommended for large-scale AI deployment in telecom, ensuring stability and inclusive value capture [S26][S9].
MAJOR DISCUSSION POINT
Large‑scale AI deployment challenge
DISAGREED WITH
Mr Pasi Toivanen, Mr Shantigram Jagannath
Ms. Pallavi Mishra
4 arguments · 56 words per minute · 601 words · 638 seconds
Argument 1
Opening remarks affirm confidence that responsible AI will drive positive transformation in telecom (Ms. Pallavi Mishra)
EXPLANATION
Mishra thanked the chairman and expressed confidence that the evolving regulatory frameworks and responsible AI initiatives will positively transform India’s telecom sector.
EVIDENCE
She thanked the chairman, noted that regulatory frameworks and policies are evolving for AI-driven telecom, and expressed belief that the transformation is moving forward positively [67-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI initiatives and ethical frameworks are presented as drivers of positive change in the telecom sector, supporting the optimistic outlook [S26][S1].
MAJOR DISCUSSION POINT
Positive outlook on responsible AI
AGREED WITH
Shri Anil Kumar Lahoti, Mr Pasi Toivanen
Argument 2
The inaugural plenary must focus on transparency, security, safety, sustainability and responsibility‑by‑design for AI‑driven telecom networks
EXPLANATION
Mishra outlines that the first session will discuss AI adoption together with key pillars such as transparency, security, safety, sustainable networks and embedding responsibility by design, signalling the need for holistic governance of AI in telecom.
EVIDENCE
She states that the session will cover AI adoption, transparency, security, safety, sustainable AI networks and embedding responsibility by design as the agenda for the first plenary session [74-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines for AI in telecom stress transparency, security, safety, sustainability and responsibility-by-design, aligning with the plenary agenda [S20][S21][S22][S23].
MAJOR DISCUSSION POINT
Governance and responsible AI design
Argument 3
AI can enable self‑healing telecom networks that detect and fix faults before they affect users
EXPLANATION
Mishra envisions a telecom network that can automatically identify and remediate problems, effectively ‘healing’ itself without human intervention, thereby improving reliability and service continuity.
EVIDENCE
She asks the audience to imagine “a telecom network that can heal itself… that can detect faults even before we know them,” framing this as a realistic capability of AI in telecom [11-13].
MAJOR DISCUSSION POINT
Self‑healing network capability through AI
Argument 4
AI is a transformative force that expands possibilities for telecom, from predictive management to intelligent customer experiences
EXPLANATION
Mishra emphasizes that AI is already reshaping industries and will further revolutionize telecom by enabling predictive network management and personalized customer interactions, creating vast new opportunities.
EVIDENCE
She states, “Today, AI is transforming industries… From predictive network management to intelligent customer experiences, the possibilities are humongous,” underscoring AI’s broad impact on telecom services [14-16].
MAJOR DISCUSSION POINT
Broad transformative potential of AI in telecom
Mr. Shantigram Jagannath
3 arguments · 138 words per minute · 1391 words · 602 seconds
Argument 1
Scaling AI‑driven services requires additional network equipment and capital investment
EXPLANATION
Jagannath notes that the simplest way to accommodate the growing demand of AI‑enabled telecom services is to procure more hardware, implying that significant CAPEX will be needed to sustain AI rollout.
EVIDENCE
He states, “Okay. So in just the easy answer is to buy more equipment,” indicating that expanding AI capabilities will hinge on acquiring new network assets [255-256].
MAJOR DISCUSSION POINT
Infrastructure investment for AI rollout
Argument 2
AI deployment must respect net‑neutrality principles to ensure equitable treatment of traffic across regions
EXPLANATION
Jagannath draws a parallel between current AI access debates and past net‑neutrality discussions, suggesting that AI‑driven services should be governed by the same fairness rules to avoid discrimination between urban and rural users.
EVIDENCE
He remarks, “I know we went through a case where… we had net neutrality and those debates also happening. So it’s not different from those types of debates,” linking AI access fairness to net-neutrality concerns [257-259].
MAJOR DISCUSSION POINT
Fairness and non‑discrimination in AI‑enabled telecom
Argument 3
AI access depends on sufficient backhaul capacity, requiring upgrades to core and transport infrastructure
EXPLANATION
Jagannath points out that AI services must traverse central AI nodes and backhaul links, so expanding AI usage necessitates strengthening the core network and transport layers.
EVIDENCE
He explains, “while we look at access to AI, there is access to the central AI, right, which has to go through a backhaul capacity,” highlighting the need for backhaul enhancements to support AI traffic [260-261].
MAJOR DISCUSSION POINT
Backhaul and core network readiness for AI
Agreements
Agreement Points
AI delivers concrete operational efficiency gains such as improved capacity, energy savings and predictive network management
Speakers: Shri Anil Kumar Lahoti, Magnus Ewerbring, Mr Pasi Toivanen
AI enables predictive network management, fault detection, energy savings, and spam filtering (Shri Anil Kumar Lahoti)
AI improves link‑adaptation capacity by 10% and energy efficiency by 33% (Magnus Ewerbring)
Push most optimization decisions to the network itself, reducing reliance on regional data centres (Mr Pasi Toivanen)
All three speakers agree that AI is already being used to optimise network performance – increasing effective spectrum capacity, cutting energy consumption and enabling predictive fault detection – and that such optimisation should be embedded directly in the telecom network rather than handled by external data centres. [38-44][206-214][239-248]
POLICY CONTEXT (KNOWLEDGE BASE)
Telecom operators are already leveraging AI for network planning, capacity optimization and energy reduction, as documented in industry analyses of AI-driven performance improvements [S57] and broader discussions of AI’s energy savings potential [S41].
Future 6G networks will be AI‑native, with AI embedded as an intrinsic layer rather than an add‑on
Speakers: Shri Anil Kumar Lahoti, Magnus Ewerbring, Mr Shantigram Jagannath
AI will become intrinsic to 6G, making networks AI‑native (Shri Anil Kumar Lahoti)
The industry aims to reach TM Forum Level 4 by 2028 and Level 5 for AI‑native 6G networks (Magnus Ewerbring)
Decision on AI‑first vs. bolt‑on architecture impacts CAPEX/OPEX and long‑term sustainability (Mr Shantigram Jagannath)
The regulator, an industry CTO and a telecom strategist all stress that the next generation (6G) will embed AI at the core of the architecture, requiring AI-first design choices and new maturity levels. [31-34][111-112][180-183]
POLICY CONTEXT (KNOWLEDGE BASE)
The vision of AI-native 6G aligns with discussions at the IGF and industry panels emphasizing AI as an intrinsic layer of future networks rather than a bolt-on, highlighted in the Ethical AI in Telecom & 6G Networks session [S46] and edge-centric 6G design studies [S55].
AI deployment must be governed by a risk‑based regulatory framework and coordinated through a multi‑stakeholder ecosystem
Speakers: Shri Anil Kumar Lahoti, Mr Pasi Toivanen, Ms. Pallavi Mishra
TRAI’s risk‑based regulatory framework and sandbox for AI trials (Shri Anil Kumar Lahoti)
Ecosystem‑wide safeguards and collaborative governance are essential (Mr Pasi Toivanen)
Opening remarks affirm confidence that responsible AI will drive positive transformation in telecom (Ms. Pallavi Mishra)
The regulator and industry leaders call for a risk-based approach, sandbox testing and broad ecosystem collaboration, while the moderator highlights the overall confidence that responsible AI, under such governance, will transform the sector. [49-53][135-158][67-73]
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based AI governance and multi-stakeholder coordination are advocated in international AI policy forums, notably the call for risk-based regulatory approaches in building an AI cooperation ecosystem [S43] and the principle-based ecosystem framework [S44].
AI‑enabled networks must ensure equitable, inclusive service for all users, especially rural and bottom‑of‑pyramid populations
Speakers: Ms. Pallavi Mishra, Mr Shantigram Jagannath, Shri Ritu Ranjan Mittar
AI can enable self‑healing telecom networks that detect and fix faults before they affect users (Ms. Pallavi Mishra)
AI must serve the bottom‑of‑pyramid with low‑cost access and inclusive services (Mr Shantigram Jagannath)
How can we make sure that a rural area is not deprived of bandwidth… (Shri Ritu Ranjan Mittar)
The opening speaker envisions self-healing, universal networks; the Tejas Networks strategist stresses low-cost, inclusive AI services for the poorest; and the moderator explicitly asks how AI can avoid disadvantaging rural subscribers. All converge on the need for universal, fair AI-driven connectivity. [11-13][168-171][252-254]
POLICY CONTEXT (KNOWLEDGE BASE)
Ensuring inclusive connectivity is a recurring theme in global policy, with IGF deliberations framing connectivity as a human right and stressing rural inclusion [S58], and UN-led frameworks promoting locally-driven, inclusive AI adoption [S51].
Workload placement between edge, network core and cloud should follow a hybrid, context‑driven approach
Speakers: Dr. Vinesh Sukumar, Mr Pasi Toivanen, Shri Ritu Ranjan Mittar
Edge AI for privacy, low latency; cloud for fleet management and model training (Dr. Vinesh Sukumar)
Push most optimization decisions to the network itself, reducing reliance on regional data centres (Mr Pasi Toivanen)
Moderator highlights need to address AI impact on access networks, security threats, and sustainability (Shri Ritu Ranjan Mittar)
Qualcomm’s VP outlines a clear split of responsibilities (edge for privacy-sensitive tasks, cloud for training); Nokia’s representative advocates moving optimisation into the network; and the moderator raises the broader operational challenges, indicating shared recognition of a nuanced, hybrid deployment model. [219-224][239-248][88-97]
POLICY CONTEXT (KNOWLEDGE BASE)
Hybrid workload placement strategies are recommended in technical guidance that differentiates edge for low-latency inference and cloud for heavy training, as outlined in edge-vs-cloud deployment analyses [S55] and case studies on remote, low-connectivity scenarios favoring edge processing [S53].
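The hybrid placement model the speakers converge on can be sketched as a simple routing heuristic: privacy-sensitive or latency-critical work stays at the edge, heavy training goes to the cloud, and routine optimisation runs inside the network itself. This is a hedged illustration; the function name and thresholds are assumptions, not anything defined in the session.

```python
# Hypothetical sketch of context-driven AI workload placement across
# edge, network core and cloud, per the hybrid model discussed above.
# The 10 ms latency threshold is an assumed, illustrative cut-off.

def place_workload(latency_budget_ms, privacy_sensitive, training_job):
    """Return the tier ("edge", "network" or "cloud") for one AI workload."""
    if training_job:
        # Model training and fleet management are compute-heavy and
        # latency-tolerant, so they belong in centralized cloud resources.
        return "cloud"
    if privacy_sensitive or latency_budget_ms < 10:
        # Keep personal data local and meet tight real-time deadlines.
        return "edge"
    # Default: run optimisation and automation inside the network itself.
    return "network"
```

For example, a real-time handset inference task with a 5 ms budget would land on the edge, while nightly model retraining would be routed to the cloud; everything else defaults to the network core, matching the network-centric emphasis above.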
Similar Viewpoints
Both the regulator and the industry CTO stress that AI delivers measurable performance and energy benefits for telecom operators, underpinning the case for wider AI adoption. [38-44][206-214]
Speakers: Shri Anil Kumar Lahoti, Magnus Ewerbring
AI enables predictive network management, fault detection, energy savings, and spam filtering (Shri Anil Kumar Lahoti)
AI improves link‑adaptation capacity by 10% and energy efficiency by 33% (Magnus Ewerbring)
Both highlight that AI‑driven optimisation should be performed as close to the data source as possible – either within the network core or at the edge – while reserving cloud resources for broader management tasks. [219-224][239-248]
Speakers: Mr Pasi Toivanen, Dr. Vinesh Sukumar
Push most optimization decisions to the network itself, reducing reliance on regional data centres (Mr Pasi Toivanen)
Edge AI for privacy, low latency; cloud for fleet management and model training (Dr. Vinesh Sukumar)
Both stress that a coordinated ecosystem (OEMs, regulators, startups) is needed not only for safety but also to unlock new business models such as AI‑model marketplaces. [135-158][188-192]
Speakers: Mr Pasi Toivanen, Mr Shantigram Jagannath
Ecosystem‑wide safeguards and collaborative governance are essential (Mr Pasi Toivanen)
AI can generate new revenue streams via a telecom‑platform “app‑store” for AI models (Mr Shantigram Jagannath)
Unexpected Consensus
Quantitative performance gains (10% capacity increase and 33% energy reduction) are accepted as realistic outcomes by both regulator and industry
Speakers: Shri Anil Kumar Lahoti, Magnus Ewerbring
AI enables predictive network management, fault detection, energy savings, and spam filtering (Shri Anil Kumar Lahoti)
AI improves link‑adaptation capacity by 10% and energy efficiency by 33% (Magnus Ewerbring)
While the regulator’s remarks are largely qualitative, their alignment with the industry’s specific quantitative figures (10% capacity, 33% energy) shows an unexpected level of concrete agreement on measurable benefits. [38-44][206-214]
Both a telecom‑centric AI‑first architecture and a network‑centric optimisation approach are advocated as the preferred path forward
Speakers: Mr Shantigram Jagannath, Mr Pasi Toivanen
Decision on AI‑first vs. bolt‑on architecture impacts CAPEX/OPEX and long‑term sustainability (Mr Shantigram Jagannath)
Push most optimization decisions to the network itself, reducing reliance on regional data centres (Mr Pasi Toivanen)
Jagannath argues for an AI-first, AI-native design, while Pasi pushes optimisation into the network core; the convergence on keeping the bulk of AI logic inside the network rather than at the edge or cloud was not explicitly anticipated. [180-183][239-248]
POLICY CONTEXT (KNOWLEDGE BASE)
Both telecom-centric AI-first and network-centric optimisation approaches are reflected in industry perspectives that describe telecom operators as programmable infrastructure layers [S46] and in network-level AI optimisation use cases for planning and performance [S57].
Overall Assessment

There is a strong, cross‑cutting consensus that AI is already delivering tangible efficiency gains, will become intrinsic to future 6G networks, and must be deployed under a risk‑based, multi‑stakeholder governance framework that safeguards inclusion, security and sustainability.

High consensus – the regulator, industry leaders, and the moderator repeatedly echo the same themes, indicating a unified direction for responsible AI integration in India’s telecom sector. This alignment suggests that policy, standards and commercial initiatives are likely to progress in a coordinated manner, accelerating AI‑driven transformation while maintaining trust and equity.

Differences
Different Viewpoints
Placement of AI decision‑making (edge vs. network core vs. cloud)
Speakers: Dr. Vinesh Sukumar, Mr Pasi Toivanen, Mr Magnus Ewerbring
Edge AI for privacy, low latency; cloud for fleet management and model training (Dr. Vinesh Sukumar)
Push most optimisation decisions to the network itself, limiting edge and regional data‑centre involvement (Mr Pasi Toivanen)
AI is already used in the network to optimise link‑adaptation and energy efficiency (Mr Magnus Ewerbring)
Dr. Sukumar argues that privacy-sensitive and latency-critical functions should run on the edge while broader fleet-management and AI/ML training belong in the cloud [219-224]. Pasi contends that the majority of AI-driven optimisation should be performed directly within the telecom network, reducing reliance on edge devices and regional data centres [239-248]. Magnus focuses on network-level AI applications that improve capacity and energy use but does not address edge versus cloud placement, emphasizing network-centric gains [206-214]. These differing views illustrate a lack of consensus on where AI workloads should reside.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over where AI decisions should reside mirrors documented considerations of edge, core and cloud placement, with recommendations for context-aware distribution in edge-centric deployment studies [S55] and remote scenario analyses [S53].
AI‑first (AI‑native) versus bolt‑on architecture for integrating AI into telecom networks
Speakers: Mr Shantigram Jagannath, Mr Magnus Ewerbring, Shri Anil Kumar Lahoti
Decision on AI‑first vs. bolt‑on architecture impacts CAPEX/OPEX and long‑term sustainability (Mr Shantigram Jagannath)
In 6G AI will be intrinsic and networks will be AI‑native, setting a baseline for future deployments (Mr Magnus Ewerbring)
AI will become intrinsic to 6G, making networks AI‑native (Shri Anil Kumar Lahoti)
Jagannath highlights a strategic choice between building a completely AI-first, AI-native architecture or adding AI as a bolt-on to existing equipment, stressing cost and sustainability implications [180-183]. Magnus and Lahoti both project that future 6G networks will be AI-native, implying an AI-first approach [31-34]. The tension lies between a forward-looking AI-native vision and the practical consideration of retrofitting existing infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
The AI-first versus bolt-on architecture discussion is echoed in panel discussions on ethical AI integration in telecom, which contrast native AI layers with add-on solutions [S46] and outline edge-centric design principles for AI-native 6G [S55].
How to scale AI across India’s 118 crore mobile connections without service disruption
Speakers: Audience, Mr Pasi Toivanen, Mr Shantigram Jagannath
Scaling AI across 118 crore connections requires an end‑to‑end ecosystem approach to avoid disruption (Audience)
A holistic ecosystem of OEMs, regulators and startups is essential; piecemeal patch fixes are insufficient (Mr Pasi Toivanen)
The simplest answer is to buy more equipment to handle AI‑driven traffic (Mr Shantigram Jagannath)
An audience member stresses the need for an end-to-end ecosystem to roll out AI at scale without interruptions [271]. Pasi echoes this, arguing that only a coordinated ecosystem can safely evolve legacy networks [272-275]. Jagannath counters with a more hardware-centric view, suggesting that procuring additional equipment is the straightforward solution [255-256]. These positions diverge on whether coordination or capital investment is the primary path forward.
POLICY CONTEXT (KNOWLEDGE BASE)
Scaling AI to serve over 118 crore mobile users is addressed in national digital strategy briefings that highlight India’s mobile-first ecosystem and the need for resilient AI infrastructure [S62], as well as investment announcements underscoring capacity expansion plans [S64].
Addressing the compute‑intensive nature of AI and its sustainability impact
Speakers: Shri Ritu Ranjan Mittar (moderator), Mr Magnus Ewerbring, Mr Shantigram Jagannath, Mr Pasi Toivanen
AI is compute‑intensive; sustainability will be important (Shri Ritu Ranjan Mittar)
AI‑driven analysis achieved a 33 % improvement in energy efficiency (Mr Magnus Ewerbring)
Buy more equipment to support AI, implying increased resource use (Mr Shantigram Jagannath)
Push optimisation to the network itself to improve efficiency and reduce reliance on data centres (Mr Pasi Toivanen)
The moderator flags AI’s high compute demands and the need for sustainable deployment [94-96]. Magnus reports significant energy-efficiency gains from AI-enabled optimisation [212-214], while Jagannath suggests expanding hardware capacity, which could raise energy consumption [255-256]. Pasi proposes network-centric optimisation to enhance efficiency and limit extra compute load [239-248]. The disagreement centers on whether AI’s sustainability challenge is best met through efficiency gains, hardware expansion, or architectural optimisation.
POLICY CONTEXT (KNOWLEDGE BASE)
The compute-intensive nature of AI and its sustainability implications are highlighted in analyses of Green AI and high-performance computing energy use, which discuss large model resource demands and environmental impact [S41][S48].
Unexpected Differences
Hardware‑centric expansion versus ecosystem‑centric coordination for AI rollout
Speakers: Mr Pasi Toivanen, Mr Shantigram Jagannath
A holistic ecosystem of OEMs, regulators and startups is essential; piecemeal patch fixes are insufficient (Mr Pasi Toivanen)
The simplest answer is to buy more equipment (Mr Shantigram Jagannath)
While both speakers aim to scale AI across the massive Indian subscriber base, Pasi’s emphasis on coordinated ecosystem development contrasts sharply with Jagannath’s straightforward call for additional hardware procurement. The divergence is unexpected given their shared industry background, revealing differing strategic priorities.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between hardware-centric rollout and ecosystem-centric coordination reflects policy discussions advocating risk-based, multi-stakeholder AI ecosystems [S43] and principle-based governance at the ecosystem level [S44].
Energy‑efficiency gains versus compute‑intensity concerns
Speakers: Mr Magnus Ewerbring, Shri Ritu Ranjan Mittar (moderator)
AI‑driven analysis achieved a 33 % improvement in energy efficiency (Mr Magnus Ewerbring)
AI is compute‑intensive; sustainability will be important (Shri Ritu Ranjan Mittar)
Magnus highlights concrete energy‑saving outcomes from AI, whereas the moderator stresses the broader compute‑intensive nature of AI that could offset such gains. The tension between reported efficiency improvements and overarching sustainability worries was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Balancing energy-efficiency gains with the high compute demands of AI models is a recurring theme in sustainability literature, noting both potential reductions in energy use through AI optimisation and the substantial electricity consumption of large-scale AI workloads [S41][S48].
Overall Assessment

The discussion shows strong consensus on AI’s strategic importance for India’s telecom sector, but notable disagreements on technical implementation—specifically where AI workloads should reside (edge vs. network vs. cloud), whether to adopt AI‑first or bolt‑on architectures, the best method to scale AI across 118 crore connections (ecosystem coordination vs. hardware expansion), and how to reconcile AI’s compute‑intensity with sustainability goals.

Moderate disagreement: while all participants share the same overarching goal of leveraging AI, the divergent views on architecture, deployment strategy and sustainability indicate that policy and industry coordination will be required to align technical choices and investment priorities. These disagreements could affect the speed and effectiveness of AI integration in India’s telecom infrastructure.

Partial Agreements
All speakers concur that AI is a transformative and essential technology for India’s telecom sector, promising performance gains, new services and economic opportunities. However, they diverge on the pathways to realise these benefits—whether through network‑level optimisation, edge‑cloud hybridisation, new revenue models, or ecosystem governance.
Speakers: Shri Anil Kumar Lahoti, Mr Magnus Ewerbring, Dr. Vinesh Sukumar, Mr Shantigram Jagannath, Mr Pasi Toivanen
AI enables predictive network management, fault detection, energy savings and spam filtering (Shri Anil Kumar Lahoti)
AI improves link‑adaptation capacity by 10% and energy efficiency by 33% (Mr Magnus Ewerbring)
Democratising AI on devices through hybrid edge‑cloud models (Dr. Vinesh Sukumar)
AI can generate new revenue streams via a telecom‑platform “app‑store” for AI models (Mr Shantigram Jagannath)
Ecosystem‑wide safeguards and collaborative governance are essential (Mr Pasi Toivanen)
Takeaways
Key takeaways
AI is becoming a core, transformative capability for telecom, moving from an add‑on to an intrinsic, AI‑native layer, especially with the upcoming 6G era.
AI already delivers tangible benefits in India: predictive network management, fault detection, energy savings (up to 33% reported), and large‑scale spam filtering (≈400 million spam calls/messages blocked daily).
The regulator (TRAI) is adopting a risk‑based framework, a sandbox environment and the human‑centric MANOV vision to ensure transparency, explainability, accountability and consumer protection for high‑risk AI use cases.
Technical implementation requires a hybrid edge‑cloud approach: edge for privacy‑sensitive, low‑latency decisions; cloud for fleet management, model training and large‑scale analytics; and a gradual shift toward AI‑first or bolt‑on architectures depending on CAPEX/OPEX considerations.
Sustainability is a key driver: AI‑driven optimization can increase spectral capacity by ~10% and reduce energy consumption significantly, but compute intensity must be managed.
New business models are emerging, such as a telecom‑platform “app‑store” for AI models, revenue sharing with startups, and ecosystem‑wide value creation through collaboration among OEMs, regulators, operators and innovators.
Equity and inclusion are essential: AI‑enabled network slicing and trust mechanisms must ensure rural and bottom‑of‑the‑pyramid users receive fair bandwidth and affordable services.
Future challenges include scaling AI across more than 118 crore connections, handling security threats from AI‑powered attacks, and preparing for a massive increase in AI agents (potentially 500 crore) that will generate new traffic and economic models.
Resolutions and action items
TRAI will continue to apply its risk‑based regulatory framework and expand the AI sandbox for live network testing of AI‑enabled 5G/6G solutions.
Stakeholders (OEMs, operators, regulators, startups) are urged to collaborate on an end‑to‑end ecosystem approach to AI integration, avoiding piecemeal patch fixes.
Industry participants committed to advancing AI‑native network designs (TM Forum Level 4 by 2028, progressing toward Level 5 for 6G).
Operators and OEMs are to explore AI‑first versus bolt‑on architectural paths, balancing CAPEX/OPEX and long‑term sustainability.
Development of trust frameworks, transparency standards and “app‑store”‑style platforms for AI model distribution was highlighted as a priority.
Unresolved issues
Precise criteria for deciding which functions should reside at the edge versus the cloud remain open.
Concrete mechanisms to guarantee that rural users are not disadvantaged in bandwidth allocation by AI‑driven network slicing need further definition.
Specific security safeguards against AI‑generated attacks on the network were discussed but not detailed.
How to manage the massive increase in AI agents and the associated economic model (pricing, billing, cost recovery) is still under debate.
The optimal balance between AI‑first native architecture and bolt‑on upgrades for existing equipment lacks a clear roadmap.
Implementation details for sustainability metrics and compute‑intensity mitigation were not finalized.
Suggested compromises
Adopt a hybrid edge‑cloud model rather than a binary edge‑only or cloud‑only approach, allowing dynamic workload placement.
Use AI‑driven network slicing with configurable policies to ensure equitable bandwidth distribution across urban and rural areas.
Pursue both AI‑first native designs for new deployments and bolt‑on upgrades for legacy equipment, selecting the approach based on cost‑benefit analysis.
Combine regulatory oversight (transparency, explainability) with industry‑led self‑regulation for low‑risk AI use cases, reserving stricter obligations for high‑risk scenarios.
Encourage ecosystem collaboration (OEMs, regulators, startups) to share value and risks, rather than each player attempting to capture the entire AI value chain alone.
Thought Provoking Comments
AI and telecommunications complement each other to form the backbone for the intelligence era. Telecom networks are emerging as the primary carriers of AI, while AI itself is becoming the intelligence layer of telecom. In the upcoming 6G technology, AI will no longer be an application layer – it will be intrinsic. The telecom networks will be AI‑native.
This statement reframes AI from being a peripheral add‑on to being a foundational characteristic of future networks, setting a strategic vision for 6G and beyond.
It established the overarching theme of the session, prompting panelists to discuss concrete steps toward AI‑native infrastructure and influencing subsequent remarks about network autonomy, standards (e.g., TM Forum levels) and the need for regulatory foresight.
Speaker: Shri Anil Kumar Lahoti (Chairman, TRAI)
Trust is the central pillar of AI adoption in telecommunications. Automated decisions taken by algorithms can affect millions of users simultaneously; efficiency gains cannot come at the cost of transparency, accountability or consumer rights.
Highlights the ethical dimension of large‑scale AI deployment, reminding stakeholders that technical progress must be balanced with governance.
Shifted the conversation from purely performance‑oriented benefits to governance concerns, leading to questions about explainability, human oversight, and later prompting the panel to address fairness (e.g., rural vs urban bandwidth) and ecosystem‑wide trust.
Speaker: Shri Anil Kumar Lahoti (Chairman, TRAI)
Our goal is to reach Level 4 in the TM Forum maturity model by 2028 and then, with 6G, move to Level 5 – a fully autonomous, AI‑native network. This is the next big step after the current self‑optimising networks.
Provides a concrete, industry‑wide roadmap that quantifies the ambition for AI‑driven autonomy, linking technical milestones to business timelines.
Prompted the moderator’s follow‑up question about what AI changes compared to existing self‑optimising networks, and led Magnus to cite measurable gains (10% capacity, 33% energy efficiency), deepening the technical discussion.
Speaker: Magnus Ewerbring (CTO, Ericsson)
AI historically lacks common sense. Translating AI to the edge for personalized inference is hard, and we need a hybrid approach where some functions run on the edge and others in the cloud. Deciding which belongs where is a major research challenge.
Identifies a fundamental limitation of current AI systems and frames the edge‑cloud split as a nuanced, research‑intensive problem rather than a simple binary choice.
Steered the dialogue toward the practicalities of deployment, leading to further questions about edge vs. cloud decisions and eliciting detailed responses from both Dr. Sukumar and later panelists about MLOps, privacy, and hybrid architectures.
Speaker: Dr. Vinesh Sukumar (Vice President, Qualcomm)
The real value will be captured by ecosystems that bring together technology players, regulators, and governments. We must proactively define, distribute, and maximise value across the ecosystem; this collaborative model is essential to address security risks and to realise the AI era.
Shifts focus from isolated technological solutions to a holistic, multi‑stakeholder ecosystem approach, emphasizing governance, shared risk, and value‑sharing.
Influenced subsequent remarks about end‑to‑end network evolution, prompted the audience question about scaling AI across 118 crore connections, and reinforced the moderator’s emphasis on collaborative frameworks.
Speaker: Mr. Pasi Toivanen (Nokia)
We have a responsibility to serve the bottom of the pyramid. AI adoption must be evaluated through cost‑optimization or revenue‑generation lenses, and we need to decide between AI‑first native architecture versus bolt‑on solutions to protect low‑cost universal access.
Brings socio‑economic considerations into the technical debate, linking AI strategy to affordability and inclusive access for the vast Indian population.
Introduced the theme of inclusive design, leading to later discussion on network slicing for rural vs. urban users and the ethical question of bandwidth fairness raised by the moderator.
Speaker: Mr. Shantigram Jagannath (Tejas Networks)
When we design the network correctly, it can perform security vulnerability assessments autonomously and push optimization decisions to the network itself, reducing reliance on regional data centres.
Proposes a concrete architectural shift toward decentralized, self‑protecting networks, addressing both security and efficiency concerns.
Prompted the moderator to ask about decisions taken off‑net and reinforced the narrative that AI should be embedded within the network fabric rather than as an external overlay.
Speaker: Mr. Pasi Toivanen (Nokia)
We need to think about a future where AI agents, not just human users, dominate traffic – potentially 500 crore AI agents. This raises new business‑model and regulatory questions about charging, economics, and network capacity.
Projects a forward‑looking scenario that expands the scope of AI impact beyond current human‑centric usage, highlighting upcoming policy challenges.
Extended the conversation from immediate technical implementations to long‑term strategic planning, influencing the audience’s question about scaling AI across the massive subscriber base.
Speaker: Mr. Shantigram Jagannath (Tejas Networks)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the dialogue from a high‑level celebration of AI’s potential to a nuanced exploration of implementation, governance, and inclusivity. Chairman Lahoti’s framing of AI‑native networks and the centrality of trust set the strategic backdrop. Technical leaders (Magnus, Dr. Sukumar) then grounded the vision with concrete roadmaps and highlighted the edge‑cloud dilemma, while Pasi Toivanen and Shantigram Jagannath introduced ecosystem‑centric and bottom‑of‑the‑pyramid perspectives that broadened the conversation to include economic, regulatory, and societal dimensions. These comments triggered targeted questions from the moderator and audience, steering the session toward actionable challenges—such as network autonomy, security, fairness, and future AI‑agent traffic—thereby deepening the analysis and ensuring the debate remained both forward‑looking and grounded in India’s unique scale and diversity.

Follow-up Questions
How is the access network evolving with AI?
Understanding AI integration in the access layer is crucial for improving network performance, scalability, and preparing for 6G deployments.
Speaker: Shri Ritu Ranjan Mittar (moderator)
What changes are expected in the core network when AI is implemented?
Core network transformation impacts latency, resource management, and the overall efficiency of telecom services.
Speaker: Shri Ritu Ranjan Mittar (moderator)
What challenges will AI on handsets pose to the network?
AI-enabled devices may generate new traffic patterns and processing demands, requiring network adaptations to maintain QoS and reliability.
Speaker: Shri Ritu Ranjan Mittar (moderator)
Will AI be used to attack telecom networks, and what defensive steps are planned?
AI can be weaponized for cyber‑attacks; identifying mitigation strategies is essential for safeguarding critical communications infrastructure.
Speaker: Shri Ritu Ranjan Mittar (moderator)
How will OEMs address sustainability given AI’s compute‑intensive nature?
AI workloads increase energy consumption; sustainable design and energy‑efficient hardware are needed to meet UN SDG commitments.
Speaker: Shri Ritu Ranjan Mittar (moderator)
What does AI fundamentally change compared to existing self‑optimizing (SON) networks?
Clarifies the added value of AI over traditional SON, informing operators about expected performance gains and investment justification.
Speaker: Shri Ritu Ranjan Mittar (moderator) to Magnus Ewerbring
Which decisions should be pushed to the edge and which should remain centralized in telecom hardware/software?
Determining the edge‑vs‑cloud split affects latency, privacy, data management, and overall network efficiency.
Speaker: Shri Ritu Ranjan Mittar (moderator) to Dr. Vinesh Sukumar
What decisions are taken off‑network and what decisions will be taken off‑network during AI‑enabled operation?
Understanding off‑network automation informs architecture design and the distribution of intelligence across the network hierarchy.
Speaker: Shri Ritu Ranjan Mittar (moderator) to Mr. Pasi Toivanen
How can we ensure rural users are not deprived of bandwidth compared to urban users when AI manages resources?
Addresses equity, net‑neutrality, and service‑level fairness concerns in AI‑driven network slicing and resource allocation.
Speaker: Shri Ritu Ranjan Mittar (moderator) to Mr. Shantigram Jagannath
How can AI be introduced across 118 crore mobile connections without causing disruption, and what is the vision for scaling AI in such a massive network?
Large‑scale rollout poses operational risk; a clear roadmap is needed to maintain service continuity while leveraging AI benefits.
Speaker: Audience member (unidentified)
Research on hybrid AI models that dynamically balance edge and cloud inference, including decision‑making criteria for workload placement.
Hybrid architectures can optimise performance, privacy, and resource utilisation, but require systematic study to define optimal split strategies.
Speaker: Dr. Vinesh Sukumar
Developing intelligent routers capable of multi‑turn conversations and adaptive decision‑making between edge and cloud environments.
Such routers are essential for seamless AI services; current static routing limits flexibility, necessitating advanced research.
Speaker: Dr. Vinesh Sukumar
Designing AI‑first versus bolt‑on architectures for existing telecom equipment, assessing lifecycle, upgrade paths, and cost implications.
Operators need guidance on whether to retrofit legacy gear or invest in AI‑native hardware to maximize ROI and future‑proof networks.
Speaker: Mr. Shantigram Jagannath
Creating ecosystem frameworks for value distribution, trust, and regulation among OEMs, operators, regulators, and startups.
Collaborative value‑sharing models are vital to capture the full potential of AI while managing risks and ensuring fair returns for all stakeholders.
Speaker: Mr. Pasi Toivanen
Implementing regulatory sandbox approaches for live network testing of AI solutions, including safety and compliance metrics.
Sandboxes enable controlled innovation, allowing stakeholders to validate AI applications while protecting public interest.
Speaker: Shri Anil Kumar Lahoti
Developing AI governance guidelines focused on transparency, explainability, and human oversight specific to telecom applications.
Trustworthy AI requires clear accountability mechanisms to protect consumer rights and maintain confidence in essential services.
Speaker: Shri Anil Kumar Lahoti
Establishing interoperability standards and ethical alignment for AI‑driven telecom across borders.
Global coordination is needed to ensure seamless operation, avoid fragmentation, and uphold shared ethical principles.
Speaker: Shri Anil Kumar Lahoti
Defining sustainability metrics for AI‑enabled telecom networks aligned with UN Sustainable Development Goals.
Measuring and reducing the environmental impact of AI deployments supports national and international climate commitments.
Speaker: Shri Anil Kumar Lahoti
Formulating economic models for AI traffic, including pricing and charging mechanisms for AI agents versus human users.
Future revenue streams depend on clear policies for monetising AI‑generated data traffic and services.
Speaker: Mr. Shantigram Jagannath
Assessing security risks of AI‑generated attacks on telecom infrastructure and developing mitigation frameworks.
Proactive security research is essential to protect networks from sophisticated AI‑driven threats.
Speaker: Shri Anil Kumar Lahoti (and panelists)
Investigating AI‑driven network slicing and dynamic bandwidth allocation techniques to ensure fairness between different regions and user groups.
Technical solutions are needed to prevent bias and maintain equitable service quality across diverse geographies.
Speaker: Mr. Shantigram Jagannath
Quantifying the ROI of AI‑driven fault prediction and proactive maintenance at massive scale.
Demonstrating tangible cost savings and performance gains will drive wider adoption of AI in telecom operations.
Speaker: Shri Anil Kumar Lahoti

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Toward Collective Action: Roundtable on Safe & Trusted AI


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined what “safe and trusted AI” means for Africa, current progress, and collaborative pathways [5-8].


Ambassador Tigo warned that AI which creates dependency, extracts data, and concentrates value abroad erodes agency and amounts to digital neocolonialism [33-36].


Professor Shock flagged short-term risks of misinformation and gendered disinformation during elections, which can erode public trust [48-55][60-64].


Dr Okolo pointed out the scarcity of Africa-specific AI incident data, citing unnoticed AI-graded exam problems in Nigeria and South Africa [72-76].


Gaffley said a survey showed three-quarters of South Africans know little about AI, leading GCG to launch courses, scholarships, and a free MOOC [141-151].


Shock emphasized that AI must reflect local languages and contexts to empower users and preserve agency [155-162].


The panel noted a lack of AI policies and talent, urging an all-in cooperative approach over competition [101-108][111-119].


Tigo identified scientists, governments, and citizens as key actors and urged capacity-building so each can evaluate, regulate, and safely use AI [173-191].


He suggested embedding safety benchmarks in procurement, creating agile oversight, and protecting data sovereignty through local alternatives and negotiation tools [241-254][255-257].


Okolo advocated for independent AI safety institutes to reduce reliance on foreign donors and tailor standards to African values [260-267].


Existing collaborations such as Masakhane, Deep Learning Indaba, GOAI Africa, and the African Compute Initiative illustrate how shared resources boost research [215-218].


The panel concluded that trustworthy, locally relevant AI needs coordinated governance, capacity building, and inclusive policies to empower citizens and prevent neocolonial exploitation [215-218][343-351].


Keypoints


Major discussion points


Risk of digital neocolonialism and loss of agency – The Ambassador warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and amount to “digital neocolonialism,” even posing an existential threat to the continent [33-36].


Misinformation, disinformation and malicious AI agents – Professor Shock highlighted the current surge of election-related misinformation and targeted disinformation (often gender-based), noting that AI amplifies these attacks and that single malicious actors can now build autonomous agents to spread false content [48-55][60-64].


Low public awareness and capacity gaps – Mark Gaffley’s survey showed that roughly 75% of South Africans know very little about AI, learning mainly from informal channels, underscoring the need for widespread education, short courses, scholarships and a free MOOC to build the skills required to define and demand trustworthy AI [141-146][147-151].


Call for pan-African collaboration and shared infrastructure – Multiple panelists stressed that competition must be replaced by cooperation: building regional compute resources (the African Compute Initiative), leveraging existing grassroots groups, and creating networks across academia, civil society, government and the private sector to empower local researchers [201-204][215-218].


Policy, procurement safeguards and data sovereignty – The Ambassador and Dr. Okolo pointed out the absence of continent‑wide AI policies, the need for safety benchmarks in procurement contracts, agile regulatory mechanisms, and strategies for data localisation to keep AI development under African control [101-108][110-118][241-254][259-266].


Overall purpose / goal


The session was convened to answer three interlinked questions: what “safe and trusted AI” means for Africa, what progress has already been made, and which collaborative pathways can advance AI governance, safety and capacity building on the continent [5-8].


Tone of the discussion


The conversation began with a formal, introductory tone. It quickly shifted to a concerned and urgent mood when panelists described risks such as neocolonial exploitation and misinformation [33-36][48-55]. As the dialogue progressed, the tone became constructive and hopeful, focusing on education, capacity-building initiatives, and collaborative infrastructure [141-151][201-204][215-218]. Toward the end, the tone turned pragmatic and policy-oriented, emphasizing concrete steps for procurement, regulation, and negotiation with global tech firms [101-108][241-254][259-266]. Throughout, the panel maintained a collaborative spirit, repeatedly urging collective action over competition.


Speakers

Ambassador Philip Tigo – His Excellency, Special Technology Envoy of the Government of Kenya; serves as the President of Kenya’s special envoy on technology and provides policy perspectives on AI safety and governance. [S1][S2]


Michelle Malonza – Co-moderator of the session; affiliated with the Center of Global AI Governance (GCG) as a panelist and contributes to discussions on AI trust and capacity building. [S4]


Speaker 2 – Moderator/chair of the panel; leads the Q&A, introduces questions, and guides the flow of the discussion. [S6][S7]


Mark Gaffley – Director of Legal and Operations at the Center of Global AI Governance (GCG); speaks on public awareness, capacity building, and policy implications of AI in Africa. [S9]


Dr. Chinasa Okolo – Founder of Technicultura; Policy AI Specialist at the United Nations Office for Digital and Emerging Technologies; provides insights on AI incident databases, advocacy, and African AI governance. [S10][S11][S12]


Speaker 1 – Opening host/moderator who introduces the panel, outlines the agenda, and closes the session with logistical information. [S13][S15]


Professor Jonathan Shock – Associate Professor in the Department of Mathematics and Applied Mathematics at the University of Cape Town; Director of the UCT AI Initiative; discusses risks, misinformation, and agency in AI systems. [S16]


Audience – Members of the audience who ask questions during the Q&A segment.


Additional speakers:


Zach – Mentioned as the person who will start the first set of questions; appears to act as a co-moderator or facilitator.


Prashok – Referred to as a participant who would take a question from the audience.


John – Briefly addressed by the moderator; no further context provided.


Iman – Named at the very end as the next person to take over after the panel discussion.


Full session report: comprehensive analysis and detailed insights


The session began with the moderator, Michelle Malonza, introducing the African-led research team – Marie-Ira Ducunda, Gatoni, Michel Malonza and the AI Safety South Africa initiative – and outlining the three interlinked questions that would guide the discussion: what “safe and trusted AI” means for the continent, what progress has already been made, and which collaborative pathways should be pursued [5-8][9-14][15-18]. She also gave brief housekeeping instructions, directing participants to the QR-code and Slido for live questions and noting the event schedule.


The moderator framed “safe and trusted AI” as technology that delivers the outcomes users desire and asked the panel to identify undesirable results in the African context. Ambassador Philip Tigo warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and constitute a form of digital neocolonialism that could pose an existential threat to the continent [30-32][33-36].


Professor Jonathan Shock then highlighted the most pressing short-term risk: a rapid breakdown of public trust caused by misinformation and targeted disinformation during elections in Ghana, South Africa and Nigeria. He distinguished misinformation (unintentional errors) from disinformation (deliberate, often gender-based campaigns) and noted that AI-enabled agents now allow single malicious actors to launch large-scale, automated attacks [48-55][60-64].


Dr Chinasa Okolo pointed out a critical data gap: existing AI incident databases return “African American” when “Africa” is queried, making it difficult to locate continent-specific harms. She cited concrete but under-reported cases where AI-graded examinations in Nigeria and South Africa produced erroneous scores that students could not contest, illustrating how AI failures can go unnoticed without proper monitoring [72-76].


Addressing the capacity deficit, Mark Gaffley presented findings from a public-awareness survey embedded in the South African Social Attitudes Survey, which showed that nearly three-quarters of respondents knew very little about AI and relied on informal channels such as social media for information [141-145]. In response, the Center of Global AI Governance (GCG) has launched short courses on AI ethics and human-rights implications, offered scholarships for African women, and is preparing a free MOOC that uses relatable imagery to broaden access to AI knowledge [147-151].


Professor Shock expanded on the empowerment dimension, arguing that AI must enhance agency by being understandable and culturally relevant. He stressed that models lacking local language and contextual nuance cannot truly empower users, and advocated for human-in-the-loop designs that preserve decision-making authority [155-162][226-233].


Ambassador Tigo noted that, despite the emergence of several AI strategies on the continent, most countries still lack concrete AI policies and the technical talent to evaluate models, especially in the public sector, where AI is often reduced to “ChatGPT” (a conflation he cited as a sign of low AI fluency) [101-108][111-119]. He urged an “all-in” cooperative effort that transcends competition, arguing that fragmented attempts waste resources and undermine collective progress [201-207].


Building on this, Tigo identified three interdependent personas – scientists, governments and citizens – each requiring capacity building. Scientists need access to models for safety evaluation; governments must develop the expertise to hold multinational firms accountable; and citizens should be included in safe environments to prevent manipulation by malicious agents [176-191]. He linked these personas to the broader goal of developing indigenous models that reflect African cultures and data, thereby reducing reliance on external providers such as OpenAI, Anthropic or Gemini [186-191].


To translate these ideas into practice, Tigo advocated embedding safety benchmarks directly into procurement contracts, creating agile oversight mechanisms that can adapt to the rapid evolution of AI, and developing negotiation playbooks that give policymakers market insight and bargaining power against trillion-dollar companies [241-254][255-257]. Dr Okolo complemented this by recommending the establishment of independent AI safety institutes – akin to the US National Institute of Standards and Technology – that could certify models, test a range of products and operate without dependence on multilateral lenders or philanthropic donors [260-267].


The panel also highlighted existing collaborative infrastructure. Professor Shock cited grassroots initiatives such as Masakhane, Deep Learning Indaba, GOAI Africa and SisonkeBiotik, and described the newly announced African Compute Initiative, which will provide a shared high-performance computing platform for researchers across the continent, exemplifying a network effect rather than competition with big-tech firms [215-218].


When discussing the deployment of AI in critical infrastructure, both Shock and Gaffley warned against a “move-fast-and-break-things” approach. They recommended human-in-the-loop, transparent systems and the preservation of analogue alternatives to avoid over-reliance on AI, especially where failures could jeopardise essential services [226-233][224-225]. Ambassador Tigo added that AI should be used to optimise development outcomes – for example, improving energy-grid efficiency – rather than being adopted for its own sake, thereby ensuring that technology serves concrete development goals [318-326].


The discussion moved to a Q&A segment. An audience member asked whether AI-generated media should be mandatorily watermarked; Professor Shock responded that watermarks are a short-term mitigation that can be circumvented by determined actors and should be complemented by broader provenance-tracking mechanisms [S1]. A second question addressed the digital divide; Mark Gaffley offered a philosophical view that the digitally excluded constitute a reservoir of creativity that should be preserved, while Ambassador Tigo stressed the need for basic connectivity, electricity and literacy before AI can be meaningfully applied, and suggested using AI to accelerate, not replace, foundational development projects such as hospitals and schools [S2]. A philosophical audience query about what socio-economic structure AI would choose was answered by Mark, who remarked that an AI-driven system would likely aim for “very efficient” outcomes and strict time-keeping [S3]. Finally, a question on how younger generations can influence AI policy prompted Dr Okolo to suggest open feedback periods, targeted research, legal analysis and the creation of informal channels where formal mechanisms are absent [S4].


In closing, the moderator thanked the panel members, invited participants to gather for a group photo, and reminded everyone of the post-event social gathering at Café Lota.


Session transcript
Complete transcript of the session
Speaker 1

Part of the research team, I believe, is here with us today, including Marie-Ira Ducunda. We have Gatoni as well, and Michelle Malonza, who will also be moderating with us today. And then we’ve got AI Safety South Africa, where we’re working on building local capacity to work on AI safety alongside evaluations research. So together, our organizations represent a growing ecosystem of African-led efforts on AI governance, safety, and capacity building. As you all must know, today we are exploring three interlinked questions. What does safe and trusted AI actually mean for the African context? What progress has already been made on the continent, and by whom? And what are the most promising pathways for collaboration going forward?

And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on my left, who is the founder of Technicultura and an AI policy specialist at the UN Office for Digital and Emerging Technologies. And then we have Ambassador Philip Tigo, who serves as a special envoy on technology for the President of the Republic of Kenya. And then we have Professor Jonathan Shock, who is an associate professor in the Department of Mathematics and Applied Maths at UCT and the director of the UCT AI Initiative. And finally we also have Mark Gaffley, who is the director of legal and operations at the Global Center on AI Governance. Hopefully we’ll also have Dr. Kola Ideson joining us in the next few minutes, who is the research director at Research ICT Africa.

In the next 47 minutes or so, we’ll spend about 30 minutes on the panel, followed by about 15 minutes for panel discussions. And then we’ll just conclude with some brief remarks to pull together the threads of what is discussed tonight. A few little housekeeping things before we start. On the slide behind me: if you have not registered on NUMA, we’d love to stay connected and be in touch. AI Safety South Africa and ELENA have exciting programs that you’d want to know about, so please scan the QR code on the top left of the screen.

With that link, you can leave us your contact details and also give us feedback on the event. And on the top right, you’ll see the link to Slido, which is the platform that we’ll use for Q&A. You can just scan the code and you’ll be redirected to a platform where you can leave your questions, and also upvote the questions you think we should prioritize in the Q&A section. Okay, that’s all the points I had to share. So without further ado, let’s get into it. I’ll hand it over to you, Michelle. I believe Zach will be starting with the first couple of questions, then I’ll take over after him.

Speaker 2

Okay, thank you. So I’ll be moderating part of the session, and my colleague Michelle will be taking part of the questions. Afterward, we’ll progress to the Q&A. So I will start with the foundation, safe and trusted AI, which broadly we can consider as AI that delivers the outcomes we want. I want to start with you, Ambassador, please. In the context of Africa in particular, what AI-driven outcomes would we consider undesirable?

Ambassador Philip Tigo

I think, and it’s quite interesting, I’ve been having this discussion of safety the whole day today. In the context of Africa, the first thing I want to be very careful about is that the African continent is not homogenous, right? So I’ll give a very specific Kenyan understanding of this, but I think it could potentially be something that is shared across the continent. The first part of this conversation is that, largely, if AI systems are creating a dependency rather than building capacity or capability, for me that’s undesirable, because the erosion of human agency, especially for a continent that is still trying to aspire, is a problem. If AI systems are extractors of African data, if they are capturing our African markets, and there’s a concentration of value outside the continent while leaving our institutions as mere implementers or users, then for me, as I said, it’s digital neocolonialism. The second part, of course, is that if these continue to be built without our knowledge, wisdom, and cultures, it creates an existential threat.

It’s almost a civilization extinction story, and that for me is not just undesirable. It goes beyond that; it’s unacceptable. So those would be my two quick responses.

Speaker 2

Okay, thank you. So, Prof Jonathan, I will move over to you. Of the possible outcomes and risks, including some of what Ambassador Tigo mentioned, what do you see as the trade-offs between short- and long-term risks? Which should we consider now, and which can we consider in the future?

Professor Jonathan Shock

Sure, thank you very much for the question. So I agree with Ambassador Tigo in terms of these ideas of neocolonialism, and the bias that is inherent in the models and the context. I think these things are all extremely important. But I think there’s something else which we have to be very aware of, which is happening right now. In fact, it happened before AI came along.

And AI is allowing this to happen at a scale where we already see disruptions, but I think there’s real risk of a complete breakdown in trust. And that’s misinformation and disinformation. We’re already seeing, around election times within Africa, within Ghana, within South Africa, within Nigeria, both misinformation and disinformation, and I disambiguate those: misinformation might be people spreading things that they just don’t know are correct, but disinformation is really targeted campaigns. And what we’re seeing is that those targeted campaigns are often gendered, that they are often against female politicians, that technology-facilitated gender-based violence is a massive issue against politicians, but also more broadly. But I think that for me, one of the real things…

is the breakdown in trust that we’re seeing in society. We’ve seen already with social media how echo chambers form. AI is really allowing that to happen at scale by malicious actors who can focus in on particular election periods and destabilize what’s happening. To me, in the short term, that’s really worrying. I think it’s quite difficult to talk about the long term. We can think about what might happen in the next few months, but thinking about the long term threats, people have talked about existential threats in terms of AI getting out of control. I think that’s something that’s extremely important to study, but I think that within particular contexts there are things that are real that are happening now that we have to worry about and try to mitigate.

I think that’s really important. The other thing that I think is happening at the moment, that I don’t hear a lot of people within the space, within the policy space maybe, talk about, is the issue of agents. And the fact that now a single malicious actor can design their own agent to carry out a misinformation campaign or a disinformation campaign. I think just over the last few months, we’ve seen that possibility come to light. And I think that’s a real worry and something that we need to understand. It’s not just now about the big tech firms. Of course, they have a major role to play in this. But I think now an individual actor can produce software that millions…

Speaker 2

Okay, thank you. So, Dr. Chinasa, I’ll move over to you. Given that current frontier AI development is driving some of the risks we are talking about, how can Africa monitor and mitigate those risks, given that most of the existing development happens outside of the African context?

Dr. Chinasa Okolo

Yeah, great question. And this reminds me, I actually talked to an Alita researcher last year when I was at the Paris AI Action Summit about some work that they were interested in doing, like an AI incident database. And I think this is actually very important, because when I look at current databases, and they’re really comprehensive for the most part, honestly, when I look up or type in Africa, for example, it reverts back to African American. And I’m based in the U.S., and that’s helpful for me to know, obviously, because I get coded as African American there, but finding just basic information about AI harms on the continent is still very hard if you’re not tuned in.

I get stuff on Twitter that comes up all the time. There are a couple of cases where some African universities, particularly in Nigeria and also in South Africa, had issues with AI being used to automatically grade standardized exams, and students had issues trying to rebut some of the scores they received. And that did not make mainstream news, probably in those countries, but not just generally. And so I think this is a really important effort, so that we understand how AI affects the African continent and communities on the continent, and so that governments can respond accurately, crafting regulations that can serve the needs of communities and also ensure that the responsible parties are held accountable for the harms they’re causing to different communities.

Speaker 2

Yeah, thank you. Just a follow-up on that. You mentioned holding responsible parties accountable. Is there anything in particular that stakeholders can do in that regard, any short-term or long-term efforts?

Dr. Chinasa Okolo

Yeah, it’s hard to say because, you know, as you can tell by my accent, I am American. I’m also Nigerian, and so I do understand a little bit of the intricacies between both countries. In the U.S. there are more formal ways for advocacy. Like, you can actually write directly to your congressman. You can call their office. Most often you won’t get them directly, but you’ll get their staff members, and they often respond. People write to them for basic issues like, oh, I can’t get my passport in time, please help expedite this. And, oh, there’s this issue happening at my school, please help with this. And so, honestly, I’m not very aware of similar pathways across African countries.

But I think that this civil society advocacy, particularly grouping together, you know, forming these coalitions, can have a lot of power. It’s just, again, there are a lot of incentives in place for governments to suppress this, and we’ve seen this turn into violence, particularly against youth. And so I am aware of this, and I don’t want to recommend this so that people get harmed. But I think there are ways that, you know, again, this coalition building can be successful.

Ambassador Philip Tigo

I wanted to jump in on that, because you talk about policy. And let’s be real, and that’s why, when Irina asked me to come to this through my colleague Stephanie, I thought it would be important, because this is a very Africa-centric discussion. I’ve been in all the global ones, I think five today. But let’s be very clear: while we have a couple of AI strategies in the continent, we do not necessarily have AI policies in the continent. So there’s already no mechanism to do this. And that’s AI in general; we’re not even talking specifically about safety. Secondly, we do not necessarily have the talent to do this in the continent.

I think that’s why what you guys are doing is important. And I mean talent in the other spaces, not even in the public sector. When you go into the public sector, unfortunately my colleagues just think AI is ChatGPT. Let’s be honest. So there’s basically a fluency question. Safety is so far down the scale that they’re not even thinking about it. So I think, for me, the sense that I have is that it needs to be an all-in effort. And this is where, in my sense, in the African continent that dichotomy between civil society and governments disappears. Because if it’s about existential risk to the continent, and when I say existential risk, I mean existential risk in terms of harms to society.

I’m not talking about… I mean, a few scientists like us can talk about models and harms from the models. The chances of an AI pressing a nuclear button in Africa? Come on. And that’s my point. So we have to redefine what existential risk for Africa on AI means. And I think this is where we really have to break from that. We can have a few of our scientists doing the existential risk models, models running rogue, the science fiction. I think that’s important work. But the risks that he’s mentioned are real, right? Threats to democracy, threats to the harmony of society are real risks. And this is how you then begin to build guardrails from a point of understanding of what is really relevant to the African continent.

Otherwise, we get lost in the other conversation: chances of happening, nil, but treated as important. Whereas these are the real risks: chances of happening, high, but less prioritized. Good data, folks.

Speaker 2

Okay. Thank you so much for that contribution. If you have a question, please use the QR codes and type your question. We’ll come back to that, but I’ll be moving over to my colleague, who will take over with the rest of the questions. Michelle.

Michelle Malonza

Just to join the conversation that you’re already having: I think so far we’ve talked a lot about what we don’t want and the kinds of risks that Africa should be focusing on versus the rest of the world, and now I’d like us to talk about how we define what we want these systems to look like and what trustworthy systems would look like. So I’d like to start with Mark, talking about what his work at GCG has revealed so far about what Africans should want from these systems.

Mark Gaffley

Cool. Thank you. Thank you for the question, Michelle, and obviously for the opportunity to speak this afternoon. I see the answer to this question as twofold. First is the answer that addresses how we actually define what we want from AI. As a high-level response, I would describe that as the desires of African citizens on the ground, especially our local communities and the marginalized and vulnerable amongst them who don’t necessarily have a voice or a seat at the decision-making table. The second response is the more likely scenario, in my view: that we remain subject to the whims, benevolent or otherwise, of those practitioners who are able to scale the most useful, and not necessarily the most beneficial, AI tools for our people.

Irrespective of whether those practitioners are based within national borders, across the broader continent, or in foreign jurisdictions around the world. When I consider these responses in the context of GCG’s work, two things come to mind. The first is the results from a public awareness and perceptions of AI survey we released in September last year. The survey was a module in the annual South African Social Attitudes Survey, which is nationally representative. The survey revealed that nearly 75% of respondents knew very little about AI. And for those who did know about AI, most of their learning was through informal and unstructured channels, including social media and television. These findings may reveal that African populations are some way away from being able to define what they want from AI, because quite simply the majority of citizens are unaware that the technology even exists.

This drives the need for creating awareness and educating our peers on AI, so that when the time does come to interact with it, they can make informed and meaningful decisions about what they want. On this, the other GCG work I’d like to highlight is the various short courses we run on the ethical and human rights implications of artificial intelligence through accredited universities in South Africa. These courses attract interest from all over the world, and for each iteration we’ve received applications in the thousands. As part of these offerings, we are also prioritizing awarding scholarships for African women as part of our Women in Focus series. Why this work is important to the question is that the courses, even if incrementally, are slowly moving the needle on the figure I mentioned earlier, equipping participants with the skills to pass on knowledge to their peers about the many benefits and risks related to AI technologies.

Finally, as a further effort towards equipping Africans to be able to define their own wants and needs, we have an online MOOC launching imminently that will offer our course content freely to the public using relatable caricatures and imagery, which I hope will further drive this objective of equipping Africans to understand and make their own informed decisions about what AI technologies to allow into their lives and what outcomes they want those tools to achieve for them.

Michelle Malonza

Thank you. I think that’s really interesting because it ties right into what Ambassador was saying: that in order to know what you want as Africans, you have to know that the technology exists, what AI technologies exist, and what exact technology we are talking about when we say AI. So maybe I should let the rest of the panel… I don’t know if I… say what they think Africans want, and then we’ll go into

Professor Jonathan Shock

So I think, you know, I don’t want to speak to what an individual person wants, but I think that what we all want is empowerment. We all want agency. And so there is a possibility that we can think about AI as a way to give agency, and I spoke about agents before, and I mean agency in a slightly different sense, for people to understand the possibilities that they have. And to increase that range of possibilities so that people can make choices. And so knowing that there is something out there that can give you, empower you, is great, but it has to be able to empower you within a context. And, you know, we’ve spoken many times about the, you know, the lack of context, local context within these models, the lack of language, contextual language information.

And until those things have been fixed, it’s not actually going to empower people. So to me, it has to be about making sure that the model… understands local context, and then making sure that it’s actually giving people agency to make decisions. I think that’s really important.

Dr. Chinasa Okolo

Awesome. Yeah, so I’ll try to be a little bit nuanced about this because, again, I’m Nigerian-American. I grew up right in the middle of the United States. I have been fortunate to travel across the continent very frequently over the past couple of years or so. But going off of what Jonathan said, I would say that I do see a real opportunity, one, to contribute to equitable governance structures and mechanisms, but also just an opportunity to actually participate equitably in AI development more broadly. That’s what I see a lot of young Africans wanting, particularly, one, because the epidemic of underemployment is very stark on the continent, and also just generally because these systems have the power to change the world and have changed the world already. And so I think that this is something.

A lot of our conversations on AI safety can also provide new avenues for African researchers, scientists, and engineers to really contribute new research that we’re still missing. Particularly when we consider the U.S. context, or even these prominent AI safety or fairness conferences, a lot of the work on bias is rooted in race, for example, which is, again, a Western construct. And so if we understand how AI impacts people from different castes, tribes, religions, genders, and the intersection of all of these, I think this will, one, advance the field as a whole, but, again, also provide more opportunities for these governance structures that are needed within African contexts.

Ambassador Philip Tigo

Sorry. No, I think a couple of things. And I take this from a persona approach because, again, I think Africa and its communities are a little bit different, and I’ll take the three important personas. One, I think, is basically our scientists, right? Our scientists, for me, need access, because you cannot talk about benchmarks and evaluations around safety if you don’t have access to these models, and we are the ones who bear the brunt of these models. I’ve given an example: Kenya is the biggest user of ChatGPT, and the first use of ChatGPT there is emotional advice. That’s real data. So you’re asking a model for emotional advice that doesn’t understand your context. What does that mean? So I think there has to be a way for our scientists to have access to these models, which means also capacity for them to be able to evaluate these models. The second persona is governments: a way that, working with scientists, governments can hold those companies to account because of the potential adverse harms they can do to our society and community. That’s where I see them going hand in hand. Now, that’s what governments want, but what governments need is capacity, because you’re talking to five-trillion-dollar companies and your GDP is like a hundred billion dollars.

So I think potentially we have to… this is where there has to be collaboration, because these companies understand market pressure, not necessarily regulatory pressure. So there has to be a nuanced approach to how you do that. The third part, of course, I think is the citizenry, right? The citizenry, in my sense, just needs to be included. And part of inclusivity is the safety work, right? You must be included in a safe environment so that you’re not left to the whims of agents or folks who can manipulate the crowd. So I look at those three personas. But I think the underlying infrastructure in this is basically looking at how we ensure that, as a collective in the continent, we can build our own models.

And I think that’s important, right? Because part of agency is human agency, but part of the challenge to agency is over-reliance on, for example, external models. I understand local context, I understand culture, but that capability to be able to build our own models that are nuanced to our own context, I think, is a good option. Then you are not left to Gemini, Qwen, OpenAI, Anthropic; I can mention five of them. What choice do we have right now if we don’t have an alternative, potentially built from open source?

Michelle Malonza

Thank you very much for all your responses. I really appreciate the discussion of how capacity and access are the ways we are going to achieve agency and empowerment. That brings me to the next question, which all of you have touched on: what is going to make it possible for us to strengthen cooperation and engagement across the region in Africa? Because that’s a key part of making the access possible to begin with. I can see the Ambassador has immediate thoughts, so I guess we can start with you, since you are very expressive, and then go to Dr. Chinasa and the rest of the panel.

Ambassador Philip Tigo

Stop competing. I’m really… I’m sorry, sometimes I stop being an ambassador at some point. Because AI is not ICT. It’s not about who’s going to build the best data centers, who’s going to do X or Y. This is a collective, all-in effort. I think, for me, that’s the biggest shift we need to make: it’s not about competition, it’s about cooperation and collaboration. That’s what will make us work together. And I’m saying this out of frustration, because I see it. And it’s a waste of money. But also, it’s just a waste.

Dr. Chinasa Okolo

Alrighty. So, I know in the draft of this I mentioned I’ll talk about some of the work at the UN. I’m speaking in my personal capacity too. But, you know, we just recently launched the Independent International Scientific Panel on AI. I read nearly every application for that, and so I think this is really important. I was very happy to see African representation on the panel; we have eight members, I believe, and I was thinking we were going to get maybe around four or five or so. So it’s really good to see that our voices are valued, and also, more broadly, that there are other efforts to complement the panel, including the Africa AI Council. I also look forward to seeing how this plays into the work the UN is doing, and also, again, some of the other initiatives around the Global AI Dialogues, which play directly into the panel’s work as well. Not to say that this inclusion will actually lead to actual change (sometimes, honestly, it doesn’t), but I think the UN is a little bit special in some cases, where we’ve seen how the work that was done with the HLAB on AI really led to increased conversations and discourse on this idea of international AI cooperation.

And so I hope to see African governments do this kind of work individually. I had the chance to serve on the AU’s Continental AI Strategy. I did this work when I was a PhD student, like four years ago, and then also served as a drafting member on Nigeria’s National AI Strategy as well. And I did this all the way from the US, and I think that there are many opportunities for, again, African countries, and also those throughout the global majority, to build their own initiatives for this kind of AI cooperation.

Professor Jonathan Shock

Yeah, I’d like to follow up on, in particular, Ambassador Tigo’s point about the need to not be competing with each other. Within Africa, I think there are already really, really good examples of people working together. You’ve got Masakhane, you’ve got the Deep Learning Indaba, you’ve got GOAI Africa, you’ve got Sisonke Biotik. All of these grassroots organizations are already doing amazing work with limited resources; you then add some resources to this and you really superpower what people can do. At the University of Cape Town, the African Compute Initiative was announced today. The idea of this is that we happen to have a cluster, an HPC, a high-performance computing center, currently with a lot of capacity, that is to say, a lot of space. We are setting up an African Compute Initiative which researchers around Africa are going to be able to use. We’re setting up a cloud platform, we’re bringing in GPUs, state-of-the-art compute, that’s going to allow people at other universities to do their research. This is not a competition; this is really about how one set of people empowers another set of people. Because, you know, there is no competing with a trillion-dollar company, but actually what we have is a network effect, and that’s really, really powerful in and of itself. So we need to be working with academia, with civil society, with government, with the private sector. All of these groupings need to work together.

Michelle Malonza

Alright, so I’ll do the final question before we get into the Q&A. I think you’ve all touched upon how you think the engagement and the policy should work around the continent, moving from strategies to policy. So if Africa is able to come up with its own systems, or find a way to gain leverage over the companies to localize the systems they’re going to deploy on the continent, what considerations should be made when deploying those specific systems into our critical infrastructure? Because that somehow seems like an inevitability. So what considerations should African governments be making when thinking about integrating AI into critical infrastructure?

I can start with Mark, since he’s the one who didn’t answer in the last round of questions; the price to pay for staying silent.

Mark Gaffley

…for the problem we’re trying to solve for. Sorry, John. So, yeah, just to ask if it is actually necessary. And the other thing, you know, recognising access and inclusion issues, is just to keep the alternatives open. So if you are going to digitise something, or use AI tools to solve a particular problem, just make sure that those who can’t access them still have their analogue approaches to doing things. I did mention to someone earlier that I was the against-tech person in the room, so I think that’s why I’m pushing the analogue way.

Professor Jonathan Shock

Cool. So I think we just have to be very, very careful here of the, you know, Silicon Valley approach of move fast and break things. If you try to take a system, some sort of infrastructure system, be it, you know, a government department, and try to AI-ify it, there are massive, massive risks there. That’s not to say that we shouldn’t be thinking about this and doing it very carefully. But we have to understand, and again I go back to agency, the agency that we remove when we get an AI system to make the decisions for us. I think there are really good ways to do this with a human in the loop, where we can have transparent systems so we can understand what the decision-making process is.

But if we’re simply going to a company who sells a product and says, we can streamline your service, then we’re really beholden to that company. And if it turns out that that’s not the right solution, trying to undo it when you’ve lost the skills leaves you in a really difficult position. So I think we need to move at a reasonable pace, but not break too many things along the way. I think that’s a real risk.

Ambassador Philip Tigo

Well, I think I probably have an advantage because I’m in government, so we face a lot of these things. Partly, it’s to understand the challenge, right? The challenge, remember, is that, especially for the African continent, the median age is 19.7, very young, already engaging with these tools, while government is engaging with 19th-century technology, and so there’s a gap. And so there’s already sufficient pressure for governments to engage with these new tools. So there’s really not much room to make the rational choice of not using these new technologies, because you have a population that is already using them. So then what does that leave you as options? The options, for me: it means that you need to start creating some form of guardrails even before you acquire the tools.

So procurement is one tool. We can write a lot of these rules into the procurement documents, and I don’t think many of us are doing that. Include safety benchmarks in that; include audits. A lot of these guys don’t want to be audited, so just get that in there, because they want your business. And I have a sense maybe that’s the sweet spot, the point of decision making. At that time, everybody wants to talk to you, and that’s where African countries lose the game. The second part, of course, is that because the technology changes very quickly, I have a sense that what we need to do is continuously have these agile mechanisms that keep pushing the foundational questions, because this is not one technology. It’s not a laptop that you’re going to buy and use for three years.

It’s going to change in the next two, three months. So I think we potentially need that. Third, I think, is contingency planning: this single-sourcing business should not work; we need options. And for me the fourth consideration is to always keep the local option open, because, I mean, data localization, sovereignty: it’s about sovereignty. And part of it is that we don’t do that. That’s where we also start to make strategic decisions of separating the private sector, from global big tech to local private-sector companies to small and medium enterprises. I think we need to do that deliberately, because then at least the local companies can be managed by domestic law.

For these other ones you probably have to go to Silicon Valley to litigate. So for me, and it will keep on evolving, these are the things that I’m seeing right now as potential options. But I think it still all boils down to the capacity of the decision maker or the policy maker to be able to absorb these insights. Where we lose is negotiations. And part of what my team continuously does, and maybe this is something you need to consider, is think about these playbooks, guidebooks, negotiation tools, so that when they are negotiating, at least they have some knowledge as their power to engage. I’m not talking about, you know, the hundred billion, the five trillion; maybe when you have knowledge and market insights, you’re actually in a better position to engage.

Negotiate.

Dr. Chinasa Okolo

Yeah, so I definitely agree with my co-panelists on a lot of the topics brought up. I would say, for the first one, particularly around the need for AI as an actual solution: governments really need to evaluate whether simpler solutions, not based on AI or deep learning, are actually sufficient. And then also around the need for guidelines on procurement. I’ve been doing some work with the World Bank, and we’ve seen in our work that a lot of African governments, across the majority of regions, are really being bombarded by suppliers to basically buy solutions. A lot of them, I think, are honestly unnecessary, and a lot of governments don’t have the capacity to evaluate these and make decisions, let’s say, transparently in-house.

And I think the key part of actually building that capacity will be establishing AI safety institutes, or whatever name governments want to call them. Within the United States, we have this embedded within the National Institute of Standards and Technology, and they test more than technology: it’s food, lotions, cosmetics, all that stuff, too. And this may not look the same across Africa, Southeast Asia, South Asia, et cetera, but it really needs to be done, again, just to have this independent capacity and also not be reliant on these multilateral lenders and foreign organizations, or even philanthropic organizations, that may be funding or providing solutions that may not be aligned with African needs and values, or may not even be necessary in the first place.

Michelle Malonza

Thank you so much for your responses; they were very thoughtful. As we think about figuring out what we don’t want, it’s about what specifically African countries think is risky, treating the short term as the priority; and what we want rests on, I think, what we’re talking about: our capacity to make that decision, the autonomy to decide what we want, and then localizing in that context. And then in terms of how to collaborate across the board, the sense that I’m getting from the panel generally is that we need to think beyond competition so that we can have leverage against the big companies.

So thank you so much for your detailed and thoughtful responses. I’ll hand it over to Zach to get us into the Q&A session.

Speaker 2

Okay, thank you. So we’re going to take a few questions, and maybe I’ll also take one or two questions from the audience. One of the questions here is kind of broad, so maybe, Prof Shock, I’ll hand it over to you, in 30 seconds. The question: to improve inclusivity and trust, what should an ideal AI model optimize for?

Professor Jonathan Shock

Gosh, that’s a difficult question. I think part of it has to be about transparency: how is a decision being made? People talk about the black box problem of AI systems. In fact, that isn’t quite the right way to look at these systems. You can look at exactly what’s happening inside the model, you can look at all the weights of the matrices, but it’s really difficult to tell what’s actually happening in there. So building transparent systems that are understandable, I think that’s one way to build trust. Yeah, I think that’s a way to think about it.

Speaker 2

Okay, thank you for that. There is also one question here about the most significant misconceptions about the current state of AI. Maybe Dr. Chinasa.

Dr. Chinasa Okolo

I’ll probably be redundant with some of the earlier topics we discussed on the panel. But again, it’s the idea that AI is a panacea, a band-aid, a solution for a lot of things, particularly development challenges. I think we see African governments in particular doubling down on adopting and procuring these AI solutions when, honestly, building hospitals, paying teachers, and installing and sustaining reliable electrical grids would actually solve the problems much better (maybe not easier, but better), and also with little opportunity for funds being diverted or wasted on a non-functional solution. So that’s one thing; I think my other panelists would probably have other good comments as well.

Speaker 2

All right, is there any question from the audience? Maybe we can take one question. Okay, I will take one, but very brief.

Audience

First of all, thank you for being digitally inclusive for those of us who couldn’t use the QR code. My question is to Professor Shock. So you talked about misinformation and disinformation; maybe I can work my way back a little bit. I think in some ways we need to start talking about disincentivizing some types of AI, and this is what I mean. Usually when we talk about disinformation, we think about it from the user’s perspective, right? But if you create a tool, for example, I don’t see why there’s this sort of massification of the use of AI tools for media creation. It’s not very necessary. There’s a running joke about someone saying, well, I was hoping AI would be created to do some of the hard work that I do at home, like laundry or housekeeping, so I have more time to actually do media and entertainment, but it’s the reverse, right?

So we’re having AI do all of this sort of stuff, and we’re not really making progress on robotics and stuff like that, well, relatively, compared to LLMs, right? So my question is: should we have some sort of, say, mandatory watermark for AI-generated media? In that case, if I see some video or some songs or some pictures, I know it’s AI-generated, and in some ways I’m naturally not inclined to believe it. Is that a workable solution?

Professor Jonathan Shock

I think the cat is out of the bag. I think it’s great if some organizations do put watermarks on; indeed, within China and within some of the other companies, they are beginning to do that. But because we now have open-source models, and the open-source models are getting very, very good, if a malicious actor wants to set out a disinformation campaign, they’re just going to choose the one that doesn’t have the watermarks. I can see that one could, for instance, have media where there are requirements to carry information about whether or not it’s come from an AI system, but when there are choices between watermarked output and non-watermarked output, the malicious actor is just going to choose the one which is going to subvert the system.

So I think that it may be a stopgap, but I think it’s a very short one.

Speaker 2

Okay, thank you. In 20 seconds.

Audience

So this is to the panelists. I would say about 64% of the continent of Africa doesn’t have access to the internet and so is digitally excluded. So my question is: how do we make sure that our advancements with AI are not widening the digital divide? I think it’s a really big problem. As we’re moving forward with AI, there are people who don’t have access to the internet, electricity, and other things. So how do we ensure that we’re also thinking about those digitally excluded individuals? Thank you.

Mark Gaffley

This is a very abstract response, but it’s something I’ve been working on, so I’ll float it here. It’s this idea of the digitally excluded as the last vestiges of creativity left on the planet. If you play it out over time, those who don’t have access, given what I said about mental arrest and cognitive decline and so on, become the ones that we eventually come to for creative ideas and independent decision-making abilities. So, just to flip it: perhaps this focus on not having access as being excluded could, way down the line, be the way that you are actually included, and in fact relied on, because you kept your cognitive abilities intact.

So yeah, a bit out there, but I thought I’d float it.

Ambassador Philip Tigo

In that particular instance, and this is where I think AI becomes interesting, part of what I always speak about is the unfinished business that African governments need to do. So it’s about connectivity, it’s about electricity, it’s about literacy, it’s about the kind of old infrastructures that we’ve not built. I think for the African continent, this is where you start to use AI to optimise development; AI accelerates development. And if you look at Kenya at least, that’s what we’re doing. For example, we’ve realized with artificial intelligence that a lot of our energy optimization was wrong, because we were going for last-mile electricity connectivity.

But now with AI, and with the World Bank, we’re realizing that you could do this a little bit differently. All I’m saying is that we can leverage this technology on those non-sensitive capabilities to actually accelerate development, so that again it’s not AI for AI’s sake. So, for African governments: don’t get AI for chat, right? Get AI for something else that drives development.

Speaker 2

All right, thank you. We only have one minute for questions, so I will take the last two questions together, and our panelists will answer briefly. So, one question here, one question there.

Audience

My question is a little philosophical one. We talked about how right now AI is in a war, as with many new technologies: each country and each company is trying to be capitalistic and one-up the other. Uniquely with AI, though, AI might just be the one that catches up with itself; there’s a possibility, right? There are so many economic structures out there, like socialism and capitalism, which focus on optimizing certain things, like engagement on social media, for example. So if AI had to decide on a structure for humanity, I would just like your opinions on that.

Speaker 2

Okay, thank you. We’ll take one question here.

Audience

Yeah, okay, thank you. So I’m going to consider two things, which are policy and our generation at large. I wanted to ask: considering the zeal that we have for knowing AI, is the next generation safer also? And considering what you’re saying about policy, that we need policy: should we go around, or just say we need policy, because we can catch AI where it actually is now in Africa, considering it hasn’t gone abroad that much, and just put policy around who is going to learn this and who is going to know this about AI?

Speaker 2

Okay, thank you. So I think these two questions will be split across our panelists, so who wants to go first?

Dr. Chinasa Okolo

All righty. Yeah, I’ll take the policy one. I’m very hopeful for African governments in particular when it comes to AI policy. There is, let’s say, a big learning curve, or actually an implementation curve, from the 20 or so strategies and two draft policy frameworks. And there is an opportunity for the younger generation to be involved. One way, I think, is providing feedback on different strategies; a couple of countries have had open feedback periods. Most of them haven’t, unfortunately. But despite that, I think doing research and legal analysis and providing those findings openly can actually drive a lot of change. Again, if there happen to be formal mechanisms to provide this feedback, obviously take advantage of them.

If not, you know, create your own avenues or pathways to do so. And then I’ll let my panelists speak. Okay.

Speaker 2

Mark, do you want to add something? All right. Pro. Okay.

Mark Gaffley

Very briefly. Okay. Well, that would be my point: I think if AI were to structure humanity, we’d be very efficient and we’d keep to time.

Speaker 2

All right. Thank you so much for your contribution. We’ll hand it over to Iman so that she can… Thank you.

Speaker 1

Thank you so much. I’ll be super brief. Well, I’ll first start by thanking our incredible panel. Thanks a lot for your insights and energy and time. Thanks to you all for coming. It’s been a long few days, I imagine, being here at the conference. There are such great people to talk to and learn from. Before we wrap up, we’d love to take a picture with the panel, so I’ll invite you to just step forward here so that we can grab a picture together. And as they do that: for everyone, we have a social happening at 7:30 today at Cafe Lota. That is in a museum close by; you could just Google it. And we’d love to see you there.

We’re going to be heading there at 7:30. Thanks, guys. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (8)
Confirmed (high)

“The moderator, Michelle Malonza, introduced the African‑led research team – Marie‑Ira Ducunda, Gatoni, Michel Malonza and the AI Safety South Africa initiative.”

The knowledge base lists the same research team members (Marie-Ira Ducunda, Gatoni, Michel Malonza) as part of the roundtable, confirming their involvement [S2].

Correction (medium)

“The moderator’s name is Michelle Malonza.”

The source identifies the co-moderator as Michel Malonza and notes this is the same person as Michelle Ma, suggesting the report’s spelling may be inaccurate [S2].

Additional Context (high)

“Ambassador Philip Tigo warned that AI systems that create dependency, extract African data and concentrate value abroad erode human agency and constitute a form of digital neocolonialism that could pose an existential threat to the continent.”

A related warning about digital neocolonialism was made by Nicaragua at the UN, highlighting similar concerns about AI-driven dependence and concentration of value abroad [S20].

Confirmed (high)

“Professor Jonathan Shock highlighted the most pressing short‑term risk: a rapid breakdown of public trust caused by misinformation and targeted disinformation during elections in Ghana, South Africa and Nigeria.”

Multiple sources identify misinformation and disinformation as the biggest short-term risk to democratic trust, aligning with Shock’s assessment [S50] and the WEF Global Risks Report [S97].

Confirmed (medium)

“He distinguished misinformation (unintentional errors) from disinformation (deliberate, often gender‑based campaigns).”

The distinction between misinformation (unintentional) and disinformation (intentional) is explicitly discussed in the IGF report on trust online [S101].

Confirmed (high)

“AI‑enabled agents now allow single malicious actors to launch large‑scale, automated attacks.”

AI’s ability to enable micro-targeted, large-scale disinformation campaigns is documented in a discussion on AI-driven manipulation [S99].

Additional Context (low)

“Dr Chinasa Okolo pointed out a critical data gap: existing AI incident databases return “African American” when “Africa” is queried, making it difficult to locate continent‑specific harms.”

While the knowledge base does not address the specific database issue, it confirms Dr Okolo’s participation and her focus on AI bias and data sovereignty in Africa [S11].

Confirmed (medium)

“The moderator framed “safe and trusted AI” as technology that delivers the outcomes users desire and gave housekeeping instructions directing participants to the QR‑code and Slido for live questions.”

A similar event description notes the moderator repeatedly mentioning QR-codes and Slido polls for audience participation, confirming this procedural detail [S94].

External Sources (106)
S1
Responsible AI for Shared Prosperity — -Philip Thigo- His Excellency Ambassador, Special Technology Envoy of the Government of Kenya
S2
Toward Collective Action_ Roundtable on Safe & Trusted AI — And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on…
S3
S4
Toward Collective Action_ Roundtable on Safe & Trusted AI — -Michel Malonza: Mentioned as co-moderator but appears to be the same person as Michelle Malonza -Michelle Malonza: Co-…
S5
Agents of inclusion: Community networks & media meet-up | IGF 2023 — Elisa Heppner, the grants management lead for the APNIC Foundation, is instrumental in driving these ventures. She empha…
S6
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S7
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S8
S9
Toward Collective Action_ Roundtable on Safe & Trusted AI — – Ambassador Philip Tigo- Dr. Chinasa Okolo- Mark Gaffley – Mark Gaffley- Professor Jonathan Shock
S10
https://dig.watch/event/india-ai-impact-summit-2026/toward-collective-action_-roundtable-on-safe-trusted-ai — And to explore those questions, we’ve got an amazing panel that I’m honored to introduce. We’ve got Dr. Chinasa Okolo on…
S11
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — Chinasa T. Okolo emphasized opportunities for smaller nations to lead through contextual innovation, data sovereignty, a…
S12
Toward Collective Action_ Roundtable on Safe & Trusted AI — – Ambassador Philip Tigo- Dr. Chinasa Okolo – Professor Jonathan Shock- Dr. Chinasa Okolo
S13
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S14
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S16
Toward Collective Action_ Roundtable on Safe & Trusted AI — – Ambassador Philip Tigo- Professor Jonathan Shock – Professor Jonathan Shock- Audience Both recognize different types…
S17
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S18
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S19
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S20
UN General Assembly 66th Plenary Meeting – WSIS Plus 20 High-Level Review — Artificial Intelligence Governance and Emerging Technologies Criticism of Western countries’ selective information acce…
S21
Africa’s Prospects in the New Global Economy: A Comprehensive Analysis from Davos — Economic | Development Mene acknowledges that while the African Union has a critical minerals strategy, governments con…
S22
How Multilingual AI Bridges the Gap to Inclusive Access — Capacity development | Artificial intelligence He highlights that only a tiny pool of experts exists worldwide, stressi…
S23
How African knowledge and wisdom can inspire the development and governance of AI — Despite the significance of sharing information freely, the economic challenges faced by African experts often discourag…
S24
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Amandeep Singh Gill: Thank you so much, Jovan, and thank you to you, Diplo Foundation, and its partners for convening th…
S25
Scoping Civil Society engagement in Digital Cooperation | IGF 2023 — Regulations, standards and guardrails can be ways to address risks.
S26
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at theInternet Governance Forum (IGF) 2025 in Oslocalled for urgent action to build sovereign and ethica…
S27
Open Forum #46 Africa in CyberDiplomacy: Multistakeholder Engagement — How to reduce dependency on foreign technology providers while building local capabilities
S28
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Oluseyi Oyebisi:Yes, and thank you so much, Haiyan, for inviting me to speak this morning. I think in terms of African v…
S29
Towards a Safer South Launching the Global South AI Safety Research Network — – Ambassador Philip Thigo- Mr. Amir Banifatemi- Dr. Balaraman Ravindran – Dr. Rachel Sibande- Ms. Chenai Chair- Ambassa…
S30
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S31
AI: Lifting All Boats / DAVOS 2025 — Vijay Vythianathan Vaitheeswaran: Welcome, ladies and gentlemen, to our session on AI, lifting all boats. I’m Vijay Vy…
S32
Advancing Scientific AI with Safety Ethics and Responsibility — -Speaker 2 (P.T.): AI safety researcher with expertise in biosecurity and AI-enabled biological tools, associated with R…
S33
Open Forum #26 High-level review of AI governance from Inter-governmental P — 5. African Nations: Need to increase data infrastructure and sovereignty. Speaker 2: I’m sure you can hear me, right? …
S34
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Speaker: by the minister by the end of the year, early November. So, and also, we are also drafting an implementation st…
S35
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Audience: Good evening, everyone. Is it? Okay. My name is Lydia Lamisa Akamvareba from Ghana. I’m looking at the team up…
S36
Finnovation — Research conducted by Georgadze’s firm reveals that women tend to require more confidence and knowledge before making fi…
S37
ACKNOWLEDGEMENTS — – Such initiatives should be designed to be inclusive, so marginalized groups and communities, especially people with di…
S38
Digital divides &amp; Inclusion — In conclusion, the digital divide remains a serious and urgent issue that requires collective action. The lack of intern…
S39
Bridging the Digital Divide for Transition to a Greener Economy — Audience:Yes, thank you very much. My name is Tilman Kupfer. I’m an independent consultant from Brussels but with a back…
S40
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Another viewpoint raises concerns about the risks associated with AI. One such risk is “knowledge slavery,” where a cent…
S41
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — The discussion took a critical turn when Moctar Yedaly delivered a stark warning about Africa’s digital sovereignty. He …
S42
Gen AI: Boon or Bane for Creativity? — An election year is approaching with an expected overflow of misinformation and disinformation
S43
Main Topic 3 –  Identification of AI generated content — Dr Laurens Naudts, from the AI Media and Democracy Lab at the University of Amsterdam, provided a legal perspective, dis…
S44
High-Level Session 1: Navigating the Misinformation Maze: Strategic Cooperation For A Trusted Digital Future — Natalia Gherman: Thank you, and good morning, ladies and gentlemen. Great pleasure to be here, and just as Madam Moderat…
S45
How African knowledge and wisdom can inspire the development and governance of AI | WSIS+20 — Ubuntu philosophy in AI A foundational concept for AI governance is Ubuntu, a core African philosophy emphasising interc…
S46
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — -Need for regional cooperation and mutualization: Speakers advocated for pooling resources, knowledge, and infrastructur…
S47
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Audience: Good morning. I’m Levi Siansege with Internet Society, Zambia chapter, but also with the youth IGF. I love …
S48
Policy Network on Artificial Intelligence | IGF 2023 — Understanding the context of misinformation/disinformation generation is important
S49
Decoding Disinformation: Fostering Good Practices and Cooperation online course — Key initiatives to counter disinformation in the context of elections.We take a closer look at how disinformation is bei…
S50
Viewing Disinformation from a Global Governance Perspective | IGF 2023 WS #209 — Disinformation, which can impact democratic processes, is a topic of concern. However, solid evidence is needed to suppo…
S51
DC-DNSI: Beyond Borders – NIS2’s Impact on Global South — Isha Suri: Thank you, Professor Luka. I’ll just quickly share my screen. I’m joined by my co-author Shiva Kanwar and…
S52
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: Thank you. Yeah, I mean, just building on what Chet was saying, I think, and what you were saying, Olg…
S53
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — ### Government Procurement The session demonstrated broad agreement among diverse stakeholders on the need for human ri…
S54
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the import…
S55
GOVERNING AI FOR HUMANITY — xxix Institutionalizing such multi-stakeholder exchange under the auspices of the United Nations can provide a reliably …
S56
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S57
WS #279 AI: Guardian for Critical Infrastructure in Developing World — These key comments shaped the discussion by progressively broadening its scope from specific technical challenges to enc…
S58
Artificial intelligence — Critical infrastructure
S59
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Harmonization of policies across the region was identified as a critical goal to enable seamless transactions and integr…
S60
WS #270 Understanding digital exclusion in AI era — An audience member raises the question of whether to wait for government to introduce AI policies or let the industry le…
S61
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Quote from UNDP Human Development Report 2025 stating that innovation incentives favor rapid deployment and automation o…
S62
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — There is unexpected consensus among speakers from different backgrounds (academia, industry startup, and large corporati…
S63
Toward Collective Action_ Roundtable on Safe & Trusted AI — This is a very abstract response, but it’s something I’ve been working on, so I’ll float it here. But it’s this idea of …
S64
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — European contexts focus heavily on regulatory compliance and managing cultural resistance within established bureaucraci…
S65
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Martin Koyabe:And first of all, thank you so much for inviting me, and also for giving the GFCE an opportunity to share …
S66
AI/Gen AI for the Global Goals — Shea Gopaul: So thank you, Sanda. And like Sandra, I’d like to thank the African Union, as well as Global Compact. i…
S67
AI Safety at the Global Level Insights from Digital Ministers Of — “Is there a way to put guardrails around it?”[49]. “The second point I’d like to make is that ultimately as policymakers…
S68
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — High level of consensus on implementation approach and timeline, with moderate consensus on regulatory strategies. The a…
S69
From principles to practice: Governing advanced AI in action — Both speakers advocate for embedding safety and responsibility considerations from the initial design phase rather than …
S70
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S71
Africa and the Digital Divide: Perspectives and Policies for catch up (Africa Trade Network) — Improving internet infrastructure is essential in driving the African digital economy, with the potential to increase di…
S72
NRIs MAIN SESSION: DATA GOVERNANCE — Data access and shifting from narrow notions of national sovereignty are important considerations in African data govern…
S73
The Digital Town Square Problem: public interest info online | IGF 2023 Open Forum #132 — Cultural, religious, and policy differences among African countries were emphasized in the context of data generation. T…
S74
African Priorities for the Global Digital Compact: A Comprehensive Discussion Report — Nnenna Nwakanma brought passionate advocacy for African unity and dignity to the discussion. She emphasised the fundamen…
S75
Toward Collective Action_ Roundtable on Safe & Trusted AI — The discussion began with Ambassador Philip Tigo’s powerful reframing of AI safety concerns through an African lens. Rat…
S76
The fading of human agency in automated systems — In many domains today, humans remain formally responsible for decisions shaped by automated systems. A civil servant sig…
S77
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Ashana Kalemera: Music Good afternoon. Thank you so much for joining us this afternoon. I’ll also say good morning, good…
S78
UN General Assembly 66th Plenary Meeting – WSIS Plus 20 High-Level Review — Nicaragua warns that artificial intelligence’s benefits are being monopolized by a small number of corporate and state a…
S79
New Technologies and the Impact on Human Rights — Reference to how produced data used to inform people is not maintained locally and goes elsewhere, creating risks of cul…
S80
Gen AI: Boon or Bane for Creativity? — An election year is approaching with an expected overflow of misinformation and disinformation
S81
WS #255 AI and disinformation: Safeguarding Elections — Babu Ram Aryal: I was also supposed to come into the very topic, disinformation and the election in our topic. So who…
S82
Main Topic 3 – Identification of AI-generated content — Dr Laurens Naudts, from the AI Media and Democracy Lab at the University of Amsterdam, provided a legal perspective, dis…
S83
Breaking the Fake in the AI World: Staying Smart in the Age of Misinformation, Disinformation, Hate, and Deepfake — AHM Bazlur Rahman from Bangladesh News Network for Radio and Communication described grassroots-level interventions focu…
S84
Learning from the MOOC model — Anyone (with Internet access) can enrol in a MOOC, but certain skills are needed to benefit from the instruction: fluenc…
S85
How African knowledge and wisdom can inspire the development and governance of AI — Dr. Jovan Kurbalija, Executive Director, DiploFoundation: Thank you very much. Let’s start with the Ubuntu spirit. First, e…
S86
How African knowledge and wisdom can inspire the development and governance of AI | WSIS+20 — Ubuntu philosophy in AI A foundational concept for AI governance is Ubuntu, a core African philosophy emphasising interc…
S87
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — -Need for regional cooperation and mutualization: Speakers advocated for pooling resources, knowledge, and infrastructur…
S88
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion revealed that the challenge extends beyond inequitable distribution to an overall supply-demand gap affec…
S89
Open Forum #26 High-level review of AI governance from Inter-governmental P — 2. Addressing data localisation and sovereignty concerns, particularly for developing regions. 3. Data Sovereignty and …
S90
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Audience: Good morning. I’m Levi Siansege with Internet Society, Zambia chapter, but also with the youth IGF. I love …
S91
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — -Moderator- Session moderator facilitating the panel discussion
S92
WS #211 Disability & Data Protection for Digital Inclusion — Fawaz Shaheen: Yes, I think it’s working now. Thank you so much. We’ll just start our session now. Welcome to …
S93
Day 0 Event #35 Empowering consumers towards secure by design ICTs — WOUT DE NATRIS: Thank you, Joao. And I think that shows how the two topics also intersect with each other, because w…
S94
World Economic Forum Town Hall on AI Ethics and Trust — He repeatedly mentioned the Slido polls, QR codes for audience participation, and encouraged questions from the audience…
S95
WS #25 Multistakeholder cooperation for online child protection — Gladys O. Yiadom: Can the online moderator share her screen? Full screen, please. Thank you. So the firs…
S96
AI and Digital Developments Forecast for 2026 — Feudalism, at least the peasants had some sort of agency. Slavery is not only physical slavery, it’s basically not havin…
S97
Open Forum: Liberating Science — The new WEF Global Risks Report identified misinformation and disinformation as the biggest short-term risk
S98
DC-Inclusion & DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — Viktoriia Romaniuk: Thank you very much. It’s a great honor to be here and share our experience. Among the organizations…
S99
What Proliferation of Artificial Intelligence Means for Information Integrity? — Septiaji Nugroho highlighted AI’s ability to enable micro-targeting of specific audiences such as elderly people and mig…
S100
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion revealed the complexity of platform governance in addressing different types of problematic content. Kend…
S101
(Re)-Building Trust Online: A Call to Action | IGF 2023 Launch / Award Event #144 — The issue of disinformation is also discussed, highlighting its intentional misleading of people and groups. It is noted…
S102
Global cyber capacity building efforts — Moctar Yedaly:Thank you, Martin. And thank you for the previous speakers. As I see in America, it’s very hard to follow,…
S103
Town Hall: How to Trust Technology — She cited instances such as airplane crashes, where people have demonstrated adverse overreactions. According to this pe…
S104
We are the AI Generation — Martin stressed that effective AI governance must be inclusive and globally representative, with AI systems reflecting l…
S105
Judiciary engagement — However, significant concerns emerged about AI implementation risks. Marcelja identified security vulnerabilities, histo…
S106
AI cheating scandal at University sparks concern — Hannah, a university student, admits to using AI to complete an essay when overwhelmed by deadlines and personal illness. …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ambassador Philip Tigo
6 arguments · 175 words per minute · 1976 words · 674 seconds
Argument 1
Undesirable AI outcomes – dependency, digital neocolonialism, erosion of agency (Ambassador Philip Tigo)
EXPLANATION
The ambassador warns that AI systems that create dependency rather than building local capacity erode human agency. He also flags AI that extracts African data and concentrates value abroad as a form of digital neocolonialism, and warns that AI built without African knowledge poses existential threats.
EVIDENCE
He explains that AI creating dependency undermines agency for a continent still aspiring, describes AI as an extractor of African data and value concentration outside the continent, and calls this digital neocolonialism; he also says AI built without African knowledge creates an existential threat to civilization [33-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concern about AI benefits being monopolised and creating digital neocolonialism is highlighted in the UN General Assembly discussion [S20], while calls for sovereign AI systems to avoid external dependence are echoed in IGF 2025 deliberations [S26] and multistakeholder engagement on reducing foreign tech reliance [S27].
MAJOR DISCUSSION POINT
Undesirable outcomes of AI in Africa
Argument 2
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo)
EXPLANATION
The ambassador stresses that AI development should be a collective, cooperative effort rather than a competitive race. He argues that competition wastes money and hampers progress, and that collaboration is essential for effective AI governance on the continent.
EVIDENCE
He states that AI is not about competition but about cooperation, describing competition as a waste of money and resources, and calls for an all-in effort across Africa [201-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both the roundtable on safe & trusted AI and the collective-action discussion stress the need for collaboration over competition, noting that competition wastes money and resources [S2]; the analysis of Africa’s global economic prospects also warns that individual negotiations undermine collective interests [S21].
MAJOR DISCUSSION POINT
Need for cooperation over competition
AGREED WITH
Dr. Chinasa Okolo
Argument 3
Lack of continent‑wide AI policies and talent; need to build capacity and expertise (Ambassador Philip Tigo)
EXPLANATION
He points out that Africa currently lacks comprehensive AI policies and sufficient talent, especially in the public sector, which hampers safe AI deployment. Building local expertise and policy frameworks is therefore crucial.
EVIDENCE
He notes that there are AI strategies but no AI policies on the continent, and a shortage of talent, particularly in the public sector where AI is often misunderstood as a cost centre [101-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on multilingual AI point to a tiny global pool of experts and the need for academic capacity building [S22], while studies on African knowledge highlight economic barriers that limit expert participation [S23]; the roundtable further identifies policy and talent gaps across the continent [S2].
MAJOR DISCUSSION POINT
Policy and talent gaps in African AI
AGREED WITH
Mark Gaffley, Dr. Chinasa Okolo, Professor Jonathan Shock, Michelle Malonza
Argument 4
Embed safety benchmarks and guardrails in procurement contracts; create agile, continuously updated mechanisms (Ambassador Philip Tigo)
EXPLANATION
The ambassador recommends incorporating safety standards into procurement processes and establishing agile mechanisms to keep pace with rapid AI changes. This approach aims to ensure that AI tools are vetted before acquisition and that policies remain current.
EVIDENCE
He suggests adding safety benchmarks to procurement documents, creating agile mechanisms to continuously push foundational questions, and emphasizes the need for ongoing updates as technology evolves rapidly [241-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines on digital cooperation stress the role of regulations, standards and guardrails in procurement processes [S25]; the launch of a Global South AI Safety Research Network proposes independent safety testing and certification mechanisms [S29]; and recent safety-focused discussions advise against anthropomorphising AI and call for agile safety frameworks [S24].
MAJOR DISCUSSION POINT
Safety-focused procurement and agile governance
AGREED WITH
Dr. Chinasa Okolo, Professor Jonathan Shock, Mark Gaffley
Argument 5
Promote African data sovereignty and develop home‑grown AI models to reduce reliance on foreign big‑tech providers
EXPLANATION
The ambassador argues that Africa must build capacity to access, evaluate and create its own AI models, ensuring that data stays on the continent and that AI systems reflect local contexts. This reduces dependence on external corporations and safeguards strategic interests.
EVIDENCE
He stresses that African scientists need access to models, notes that Kenya is a heavy user of ChatGPT yet receives culturally mismatched advice, and calls for building indigenous models that understand local culture, arguing that over-reliance on companies such as OpenAI or Anthropic is risky and that open-source alternatives are needed for data localisation and sovereignty [186-191].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IGF 2025 highlighted the urgency of building sovereign, ethical AI systems tailored to African contexts [S26]; multistakeholder cyber-diplomacy forums discuss reducing dependency on foreign providers while building local capabilities [S27]; a high-level AI governance review notes the need for African data infrastructure and sovereignty [S33]; and open-source AI initiatives are positioned as catalysts for a home-grown digital economy [S34].
MAJOR DISCUSSION POINT
Promoting African data sovereignty and home‑grown AI models
Argument 6
Create negotiation playbooks and guidebooks for governments to strengthen bargaining power with AI vendors
EXPLANATION
The ambassador notes that African negotiators often lack market insight when dealing with trillion‑dollar AI firms. He proposes developing structured playbooks, guidebooks and negotiation tools so policymakers can secure better terms, safety benchmarks and guardrails.
EVIDENCE
He describes the need for “playbooks, guidebooks, negotiation tools” that give decision-makers knowledge and power to negotiate with large tech companies, emphasizing that knowledge is essential for effective engagement and protecting national interests [256-258].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of Africa’s mineral strategy argue for collective negotiation tools and shared principles to avoid fragmented deals [S21]; the safe-trusted AI roundtable also calls for structured negotiation tools to empower policymakers [S2].
MAJOR DISCUSSION POINT
Enhancing governmental negotiation capacity with AI providers
S
Speaker 2
1 argument · 149 words per minute · 527 words · 210 seconds
Argument 1
Safe AI as delivering the outcomes we want – “AI that delivers the outcome we want” (Speaker 2)
EXPLANATION
Speaker 2 defines safe and trusted AI as technology that reliably produces the desired outcomes for users. This framing emphasizes outcome‑orientation rather than abstract safety criteria.
EVIDENCE
He describes safe and trusted AI broadly as AI that delivers the outcome we want [30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The safety researcher’s perspective emphasizes outcome-oriented AI design as a core principle of trustworthy systems [S32], and broader governance frameworks stress aligning AI performance with desired societal outcomes [S30].
MAJOR DISCUSSION POINT
Outcome‑oriented definition of safe AI
M
Michelle Malonza
1 argument · 218 words per minute · 600 words · 164 seconds
Argument 1
Defining what Africans want from AI – need to know the technology exists before we can decide (Michelle Malonza)
EXPLANATION
Michelle argues that Africans cannot articulate their AI preferences without first being aware of the technologies available. Understanding AI’s existence is a prerequisite for defining desired outcomes.
EVIDENCE
She links the need to know what AI technologies exist to the ability to define what Africans want, emphasizing that awareness precedes desire [152-154].
MAJOR DISCUSSION POINT
Awareness as a prerequisite for defining AI needs
M
Mark Gaffley
3 arguments · 166 words per minute · 788 words · 284 seconds
Argument 1
Public awareness, education programmes, MOOCs and scholarships to empower marginalized citizens (Mark Gaffley)
EXPLANATION
Mark highlights the importance of raising AI awareness through surveys, short courses, scholarships for women, and a forthcoming free MOOC. These initiatives aim to equip citizens, especially marginalized groups, with knowledge to make informed AI choices.
EVIDENCE
He cites a survey showing 75 % of respondents know little about AI, describes short courses attracting thousands of applicants, scholarships for African women, and an upcoming MOOC with relatable content [141-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive capacity-development initiatives targeting marginalized groups are recommended to ensure equitable access to AI education [S37]; research on gendered financial decision-making underlines the importance of tailored scholarships for women [S36]; and the scarcity of AI experts underscores the need for broad educational outreach [S22].
MAJOR DISCUSSION POINT
Education and outreach to build AI literacy
AGREED WITH
Dr. Chinasa Okolo, Ambassador Philip Tigo, Professor Jonathan Shock, Michelle Malonza
Argument 2
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
EXPLANATION
Mark stresses that when deploying AI solutions, societies must retain non‑digital alternatives for those lacking access. This ensures inclusivity and prevents exclusion of digitally marginalized populations.
EVIDENCE
He advises keeping analogue approaches for people who cannot access AI tools, emphasizing inclusion of those without digital access [224-225].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The digital-divide brief calls for collective action to keep non-digital options for populations lacking connectivity or electricity [S38]; inclusive programme guidelines also stress the necessity of analogue alternatives for the digitally excluded [S37].
MAJOR DISCUSSION POINT
Maintaining non‑digital options for the digitally excluded
AGREED WITH
Ambassador Philip Tigo, Dr. Chinasa Okolo, Professor Jonathan Shock
Argument 3
Treat the digitally excluded as a valuable source of creativity and ensure their inclusion as a strategic asset
EXPLANATION
Mark suggests that people without digital access retain unique creative capacities that could be leveraged in the future. Preserving analog alternatives not only prevents exclusion but also safeguards a reservoir of creativity that AI systems might later draw upon.
EVIDENCE
He describes the digitally excluded as “the kind of last vestiges of creativity left on the planet,” arguing that keeping analogue capabilities could be valuable for future innovation and should be considered when deploying AI solutions [315-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of the digital divide highlight the strategic value of preserving analogue creativity and ensuring that excluded communities remain part of future innovation ecosystems [S38].
MAJOR DISCUSSION POINT
Valuing and integrating the digitally excluded as creative contributors
D
Dr. Chinasa Okolo
7 arguments · 178 words per minute · 1638 words · 551 seconds
Argument 1
African AI incident data is missing or mis‑classified; current databases revert to “African American” (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa points out that existing AI incident databases inadequately represent African harms, often mis‑labeling them under “African American.” This data gap hampers understanding of AI risks on the continent.
EVIDENCE
She observes that when searching for Africa in current databases, results default to “African American,” making it hard to find AI-related harms specific to the continent [72-73].
MAJOR DISCUSSION POINT
Data gaps in African AI incident reporting
Argument 2
Need for an AI incident database and better monitoring mechanisms to track harms on the continent (Dr. Chinasa Okolo)
EXPLANATION
She advocates for the creation of a dedicated AI incident database to systematically capture and monitor AI‑related harms in Africa. Better monitoring would enable informed policy and accountability.
EVIDENCE
She recounts a conversation with an AI researcher about an AI incident database and stresses its importance for tracking harms on the continent [69-71].
MAJOR DISCUSSION POINT
Establishing an African AI incident tracking system
AGREED WITH
Professor Jonathan Shock
Argument 3
Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa argues that involving African researchers and engineers in AI development can address underemployment and bring diverse perspectives, moving beyond bias rooted in Western constructs.
EVIDENCE
She notes that young Africans seek equitable participation to combat underemployment, and that current bias research is often race-centric, a Western construct; inclusive participation would advance the field and governance structures [166-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on African knowledge and wisdom note that economic constraints limit African experts’ contributions, suggesting that equitable participation can both create jobs and diversify AI perspectives [S23]; broader discussions on inclusive AI development stress the need for diverse talent pools to move beyond Western-centric bias [S31]; and the shortage of multilingual AI experts further underscores this gap [S22].
MAJOR DISCUSSION POINT
Inclusive AI development for jobs and bias mitigation
AGREED WITH
Ambassador Philip Tigo
Argument 4
Coalition building among civil‑society groups amplifies advocacy impact (Dr. Chinasa Okolo)
EXPLANATION
She emphasizes that civil‑society coalitions can increase advocacy power, though she cautions about potential government suppression and associated risks.
EVIDENCE
She describes how civil-society advocacy, especially when grouped into coalitions, can be powerful, while also noting incentives for governments to suppress such movements, sometimes leading to violence [92-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Civil-society engagement reports identify regulations, standards and guardrails as mechanisms that become more effective when advocacy groups act in coalitions [S25]; the safe-trusted AI roundtable also highlights the power of multistakeholder collaboration for policy influence [S2].
MAJOR DISCUSSION POINT
Strength of civil‑society coalitions for AI advocacy
AGREED WITH
Ambassador Philip Tigo
Argument 5
Leverage UN, AU and national AI strategies to shape policy and ensure African voices are heard (Dr. Chinasa Okolo)
EXPLANATION
Dr. Chinasa highlights the importance of engaging with UN and AU platforms, as well as national AI strategies, to ensure African perspectives influence global AI governance.
EVIDENCE
She mentions African representation on a UN scientific panel, the Africa AI Council, her involvement in the AU Continental AI Strategy and Nigeria’s National AI Strategy, and the role of UN initiatives in fostering dialogue [210-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN General Assembly plenary on AI governance discusses the need for inclusive multilateral dialogue and warns against selective information access, aligning with calls to use UN and AU platforms for African influence [S20]; IGF 2025 emphasizes sovereign AI pathways that require engagement with continental bodies [S26]; and a high-level review notes that African voices are arriving late and need peer pressure to enter the global AI debate [S28].
MAJOR DISCUSSION POINT
Using multilateral mechanisms for African AI policy influence
Argument 6
Establish independent AI safety institutes for testing and certification, reducing dependence on external lenders (Dr. Chinasa Okolo)
EXPLANATION
She proposes creating AI safety institutes, akin to the US NIST model, to independently evaluate AI technologies, thereby decreasing reliance on foreign funding and ensuring alignment with African values.
EVIDENCE
She references the US NIST model that tests a range of products, suggesting a similar independent capacity is needed in Africa to avoid dependence on multilateral lenders or foreign organizations [264-268].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The launch of the Global South AI Safety Research Network proposes independent testing bodies akin to NIST to certify AI systems locally [S29]; earlier safety-focused discussions also call for independent evaluation mechanisms to avoid reliance on foreign entities [S24].
MAJOR DISCUSSION POINT
Independent AI safety testing bodies
Argument 7
Governments should first assess whether AI is truly necessary for a problem before procuring AI solutions
EXPLANATION
Dr. Chinasa warns that many challenges could be solved more effectively with basic infrastructure such as schools, hospitals or reliable electricity, and that rushing to buy AI tools can waste resources. A critical evaluation of AI’s added value is essential.
EVIDENCE
She states that AI is often procured when simple solutions, such as building hospitals, hiring teachers, or providing reliable electricity, would address the issue more efficiently, cautioning against unnecessary AI procurement and emphasizing the need for careful assessment [259-261].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines on digital cooperation stress the importance of guardrails and careful assessment before AI procurement to avoid unnecessary spending [S25]; governance frameworks further advise evaluating the added value of AI versus simpler solutions [S30].
MAJOR DISCUSSION POINT
Critical assessment of AI necessity before adoption
P
Professor Jonathan Shock
5 arguments · 184 words per minute · 1458 words · 473 seconds
Argument 1
Short‑term risk of misinformation, disinformation and breakdown of trust (Professor Jonathan Shock)
EXPLANATION
Professor Shock warns that AI‑enabled misinformation and disinformation campaigns are already undermining trust during elections across several African countries, posing an immediate threat.
EVIDENCE
He describes observed misinformation and targeted disinformation campaigns during elections in Ghana, South Africa, and Nigeria, noting the distinction between misinformation and disinformation and their gendered nature, leading to a breakdown of societal trust [48-55].
MAJOR DISCUSSION POINT
Immediate trust erosion from AI‑driven misinformation
AGREED WITH
Dr. Chinasa Okolo
Argument 2
Emerging threat of AI‑generated malicious agents for targeted campaigns (Professor Jonathan Shock)
EXPLANATION
He highlights that individual malicious actors can now create autonomous AI agents to conduct disinformation campaigns at scale, expanding the threat beyond large tech firms.
EVIDENCE
He notes that a single malicious actor can design their own agent for misinformation, a phenomenon observed over recent months, indicating a shift from big-tech-only threats [60-64].
MAJOR DISCUSSION POINT
Rise of AI agents as tools for malicious campaigns
Argument 3
Long‑term existential risk is less immediate; focus should be on current, observable harms (Professor Jonathan Shock)
EXPLANATION
While acknowledging theoretical existential risks, he argues that policy should prioritize the tangible, short‑term harms currently manifesting across the continent.
EVIDENCE
He states that long-term existential threats (e.g., AI pressing a nuclear button) are less immediate, and the priority should be on observable harms such as misinformation and trust breakdown [57-59].
MAJOR DISCUSSION POINT
Prioritizing present harms over speculative existential threats
Argument 4
Empowerment through agency, local language/context, and human‑in‑the‑loop design (Professor Jonathan Shock)
EXPLANATION
He argues that AI should enhance human agency by incorporating local languages and contexts, and by ensuring human‑in‑the‑loop mechanisms that allow people to make informed choices.
EVIDENCE
He stresses that empowerment requires AI to understand local context and language, and that without this, AI cannot truly empower; he also mentions the importance of human-in-the-loop design [155-163].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multilingual AI research highlights the necessity of local language support to empower users and ensure relevance [S22]; inclusive AI discussions underline the role of human-in-the-loop designs for agency [S31]; and standards for safe AI stress human oversight as a core requirement [S25].
MAJOR DISCUSSION POINT
Designing AI for agency and contextual relevance
AGREED WITH
Mark Gaffley, Dr. Chinasa Okolo, Ambassador Philip Tigo, Michelle Malonza
Argument 5
Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock)
EXPLANATION
He cautions against rapid, unchecked AI integration into critical infrastructure, advocating for transparent, human‑in‑the‑loop systems to prevent loss of control and over‑reliance.
EVIDENCE
He warns against the “move fast and break things” approach, recommends human-in-the-loop and transparent decision-making processes, and notes risks of becoming beholden to external vendors [226-233].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory recommendations call for transparent, human-in-the-loop mechanisms when integrating AI into critical infrastructure to prevent loss of control [S25]; broader AI governance literature stresses the need for ongoing oversight and transparent decision-making [S30]; and safety-focused dialogues advocate for such safeguards to reduce vendor lock-in [S24].
MAJOR DISCUSSION POINT
Safe AI integration in critical infrastructure
AGREED WITH
Ambassador Philip Tigo, Dr. Chinasa Okolo, Mark Gaffley
S
Speaker 1
1 argument · 103 words per minute · 627 words · 362 seconds
Argument 1
Promote community networking and post‑event engagement to strengthen regional ties (Speaker 1)
EXPLANATION
Speaker 1 encourages participants to continue networking after the event, highlighting a social gathering as an opportunity to build regional connections and sustain collaboration.
EVIDENCE
He invites attendees to a group photo after the event and a social gathering at Café Lota, emphasizing community interaction after the conference [369-376].
MAJOR DISCUSSION POINT
Fostering post‑event community networking
A
Audience
4 arguments · 170 words per minute · 574 words · 202 seconds
Argument 1
Implement mandatory watermarks on AI‑generated media to help users identify synthetic content and curb disinformation
EXPLANATION
The audience proposes that every piece of AI‑generated audio, video or image should carry a clear, mandatory watermark. Such labeling would allow people to recognise synthetic media and reduce the spread of misinformation and disinformation.
EVIDENCE
The audience explains that AI tools are being mass-produced for media creation and suggests a mandatory watermark for AI-generated videos, songs or pictures, asking whether this would be a workable solution to help users identify AI content [295-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standards and guardrails discussed in civil-society engagement reports include labeling and watermarking as tools to mitigate misinformation risks [S25].
MAJOR DISCUSSION POINT
Mitigating AI‑driven misinformation through labeling
Argument 2
AI development must not widen the digital divide; policies should explicitly include the digitally excluded population
EXPLANATION
The audience highlights that a large share of Africans lack internet access, electricity and digital skills, and warns that AI initiatives could exacerbate existing inequalities if these groups are ignored. Inclusive policies are needed to ensure AI benefits reach everyone.
EVIDENCE
The audience states that about 64 % of the continent does not have internet access, notes the lack of electricity and digital inclusion, and asks how to ensure AI advancements do not widen the digital divide [307-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The digital-divide brief stresses that AI initiatives must be inclusive and avoid exacerbating existing inequalities, recommending policies that address the needs of the offline population [S38]; inclusive programme guidelines further call for designing AI solutions that reach marginalized groups [S37].
MAJOR DISCUSSION POINT
Preventing AI from increasing digital exclusion
Argument 3
AI should not be allowed to decide humanity’s economic or political structure; human values must remain the guiding principle
EXPLANATION
The audience raises a philosophical concern that AI could be used to determine societal systems such as capitalism or socialism, and questions whether AI should be given that authority. The implication is that AI must remain subordinate to human‑defined values and governance.
EVIDENCE
The audience describes AI as a technology that might “decide on a structure for humanity” and asks panelists for opinions on the ideal economic system if AI were to choose, indicating apprehension about autonomous AI governance decisions [330-338].
MAJOR DISCUSSION POINT
Limits on AI’s role in determining societal structures
Argument 4
Proactive AI policy and youth engagement are essential to safeguard the next generation
EXPLANATION
The audience asks whether policy should be enacted now to protect younger people, emphasizing that early regulatory frameworks and education are required for safe AI adoption. It suggests that waiting for problems to emerge would be too late.
EVIDENCE
The audience questions if policy should be prioritized now, mentions the enthusiasm for AI knowledge, and wonders whether policy can make the next generation safer, highlighting the urgency of policy action [339-341].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI readiness discussion highlights the urgency of establishing policies now to create an enabling environment for safe AI adoption, especially for younger cohorts [S35]; broader governance frameworks also argue for early regulatory action to protect future generations [S30].
MAJOR DISCUSSION POINT
Early policy and education for AI safety
Agreements
Agreement Points
Broad consensus on the need for capacity development and AI literacy across the continent
Speakers: Mark Gaffley, Dr. Chinasa Okolo, Ambassador Philip Tigo, Professor Jonathan Shock, Michelle Malonza
Public awareness, education programmes, MOOCs and scholarships to empower marginalized citizens (Mark Gaffley)
Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
Lack of continent‑wide AI policies and talent; need to build capacity and expertise (Ambassador Philip Tigo)
Empowerment through agency, local language/context, and human‑in‑the‑loop design (Professor Jonathan Shock)
Defining what Africans want from AI – need to know the technology exists before we can decide (Michelle Malonza)
All five speakers stress that without widespread AI awareness, education, and local expertise African societies cannot define or demand safe AI solutions; they cite low AI literacy, the need for training programmes, scholarships and the creation of local talent pools [141-151][166-172][101-108][155-163][152-154].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis on capacity building and AI literacy in African AI governance discussions, where regional strategies prioritize infrastructure development and skill acquisition to bridge the digital divide [S64][S71][S59].
Shared concern about digital neocolonialism and the need for African data sovereignty
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Undesirable AI outcomes — dependency, digital neocolonialism, erosion of agency (Ambassador Philip Tigo) Promote African data sovereignty and develop home‑grown AI models to reduce reliance on foreign big‑tech providers (Ambassador Philip Tigo) Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
Both speakers warn that AI systems that extract African data or impose external models erode agency and constitute digital neocolonialism; they argue for building indigenous models and increasing African participation in AI development [33-36][186-191][166-172].
POLICY CONTEXT (KNOWLEDGE BASE)
The concern aligns with calls for African data self-determination and resistance to digital neocolonialism, as highlighted in analyses of data governance that stress shifting from narrow national sovereignty toward collaborative, continent-wide data frameworks [S72][S73][S59].
Consensus that cooperation, not competition, should drive African AI initiatives
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo) Coalition building among civil‑society groups amplifies advocacy impact (Dr. Chinasa Okolo)
Both emphasize that collaborative, coalition-based approaches are essential and that competing for resources wastes money and hampers progress [201-207][92-94].
POLICY CONTEXT (KNOWLEDGE BASE)
The preference for cooperation reflects multistakeholder approaches advocated by UN-backed initiatives and the Global Digital Compact, which stress African unity and coordinated policy harmonisation across nations [S55][S74][S59].
Agreement on embedding safety guardrails and careful procurement of AI systems
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo, Professor Jonathan Shock, Mark Gaffley
Embed safety benchmarks and guardrails in procurement contracts; create agile, continuously updated mechanisms (Ambassador Philip Tigo) Governments should first assess whether AI is truly necessary for a problem before procuring AI solutions (Dr. Chinasa Okolo) Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock) Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
All four speakers call for safety-focused procurement, including safety benchmarks, necessity assessments, human-in-the-loop designs, and retaining non-digital alternatives to avoid over-reliance on AI [241-248][259-261][226-233][224-225].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding safety guardrails and prudent procurement is consistent with human-rights-based AI governance frameworks and ministerial guidance on operationalising guardrails in AI procurement processes [S53][S67][S69][S70][S65].
Recognition of short‑term misinformation and disinformation risks
Speakers: Professor Jonathan Shock, Dr. Chinasa Okolo
Short‑term risk of misinformation, disinformation and breakdown of trust (Professor Jonathan Shock) Need for an AI incident database and better monitoring mechanisms to track harms on the continent (Dr. Chinasa Okolo)
Both highlight that AI-enabled misinformation is already undermining trust in elections and that systematic incident tracking is needed to monitor and mitigate these harms [48-55][69-71][72-73].
POLICY CONTEXT (KNOWLEDGE BASE)
Recognition of short-term misinformation and disinformation risks is supported by IGF 2023 sessions that underline the need to understand AI-generated false content and develop counter-disinformation strategies [S48][S49][S50][S51].
Importance of civil‑society involvement and coalition building
Speakers: Dr. Chinasa Okolo, Ambassador Philip Tigo
Coalition building among civil‑society groups amplifies advocacy impact (Dr. Chinasa Okolo) Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo)
Both stress that civil-society coalitions are vital for effective advocacy and that a cooperative, all-in approach should include these groups [92-94][201-207].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for civil-society involvement reflects the multistakeholder participation principle emphasized in global AI governance forums, which aim to balance power dynamics and include NGOs in policy design [S54][S55][S52].
Similar Viewpoints
Both see foreign‑centric AI as a threat to African agency and advocate for locally‑controlled data and models to avoid digital neocolonialism [33-36][186-191][166-172].
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Undesirable AI outcomes — dependency, digital neocolonialism, erosion of agency (Ambassador Philip Tigo) Promote African data sovereignty and develop home‑grown AI models to reduce reliance on foreign big‑tech providers (Ambassador Philip Tigo) Equitable participation in AI development creates jobs and advances the field beyond Western bias (Dr. Chinasa Okolo)
Both argue that competition is counter‑productive and that inclusive, cooperative strategies—including retaining non‑digital options—are essential for equitable AI deployment [201-207][224-225].
Speakers: Ambassador Philip Tigo, Mark Gaffley
Call for an all‑in, cooperative effort across Africa; competition wastes resources (Ambassador Philip Tigo) Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
Both caution against over‑reliance on AI and stress the need for human oversight or analogue alternatives to safeguard critical services [226-233][224-225].
Speakers: Professor Jonathan Shock, Mark Gaffley
Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock) Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley)
Unexpected Consensus
Preserving non‑digital/analogue alternatives while deploying AI
Speakers: Mark Gaffley, Professor Jonathan Shock
Preserve analogue alternatives and consider those without internet/electricity to prevent widening the digital divide (Mark Gaffley) Human‑in‑the‑loop, transparent systems are essential when AI‑ifying critical services; avoid over‑reliance (Professor Jonathan Shock)
Although Mark focuses on keeping analogue options for the digitally excluded and Professor Shock emphasizes human-in-the-loop designs to avoid over-reliance, both converge on the principle that AI should not replace all existing non-digital processes, an alignment not explicitly anticipated at the start of the discussion [224-225][226-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Preserving non-digital alternatives resonates with discussions on the value of the digitally excluded as a source of creativity and the need to avoid marginalising them amid rapid AI deployment [S63][S60].
Overall Assessment

The panel displayed strong convergence on four main themes: (1) the urgent need for capacity building and AI literacy; (2) the risk of digital neocolonialism and the imperative for African data sovereignty; (3) the preference for cooperative, coalition‑based approaches over competition; and (4) the necessity of embedding safety guardrails, transparent human‑in‑the‑loop designs, and preserving analogue alternatives in AI procurement and deployment.

High consensus – the repeated alignment across multiple speakers and arguments indicates a solid shared understanding of the priorities for safe and trusted AI in Africa, providing a robust foundation for coordinated policy action and regional collaboration.

Differences
Different Viewpoints
Extent and timing of AI integration into critical infrastructure and development projects
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock, Mark Gaffley
AI can be used to optimise development, e.g., energy optimisation, and should be adopted (Ambassador Philip Tigo) Move‑fast AI integration is risky; need human‑in‑the‑loop, transparent systems and avoid over‑reliance (Professor Jonathan Shock) When deploying AI solutions, keep analogue alternatives for those without digital access to prevent exclusion (Mark Gaffley)
Ambassador Tigo promotes using AI now to accelerate development such as energy optimisation [318-326], while Professor Shock cautions that rapid AI-ification of services can break trust and recommends human-in-the-loop, transparent designs to avoid over-reliance [226-233]. Mark adds that any AI rollout must preserve non-digital options for the digitally excluded [224-225]. The three share a goal of safe AI use but diverge on how quickly and in what form AI should be embedded in critical services.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI integration into critical infrastructure reference studies highlighting AI’s role in safeguarding critical systems and the necessity for policy and capacity-building frameworks before large-scale deployment in developing contexts [S56][S57][S58].
Whether AI is necessary to solve development problems versus relying on basic infrastructure solutions
Speakers: Dr. Chinasa Okolo, Ambassador Philip Tigo
Governments should first assess if AI is truly needed; many challenges are better solved with hospitals, schools, electricity (Dr. Chinasa Okolo) AI can be leveraged to optimise development (e.g., energy optimisation) and should be adopted rather than waiting for AI‑only solutions (Ambassador Philip Tigo)
Dr. Chinasa argues that AI procurement often occurs when simpler, non-AI solutions would be more effective and that a critical assessment of AI necessity is essential [259-261]. Ambassador Tigo counters that AI already offers concrete benefits, such as improving energy optimisation, and should be integrated into development agendas [318-326]. This reflects a disagreement on the priority of AI versus traditional development interventions.
POLICY CONTEXT (KNOWLEDGE BASE)
The question mirrors prior dialogues that prioritize foundational connectivity and infrastructure as prerequisites for effective AI adoption in Africa, while also noting pragmatic views that immediate AI applications should complement, not replace, basic infrastructure [S71][S62].
Preferred focus for building AI capacity on the continent
Speakers: Mark Gaffley, Ambassador Philip Tigo
Raise public awareness through surveys, short courses, scholarships and a free MOOC to empower citizens (Mark Gaffley) Develop home‑grown AI models, ensure data sovereignty and give African scientists access to and ability to evaluate models (Ambassador Philip Tigo)
Mark emphasizes citizen-level education and outreach as the primary route to capacity building [141-151], whereas Ambassador Tigo stresses technical capacity for model development and data sovereignty as the cornerstone of African AI capability [186-191]. Both aim to strengthen capacity but differ on whether the priority is broad public AI literacy or technical model-building infrastructure.
POLICY CONTEXT (KNOWLEDGE BASE)
Preferences for AI capacity-building focus are informed by reports that African strategies prioritize infrastructure development, innovation hubs, and policy harmonisation to foster sustainable digital economies [S64][S59][S71].
Preferred mechanisms for strengthening governmental negotiating power with AI vendors
Speakers: Ambassador Philip Tigo, Dr. Chinasa Okolo
Create negotiation playbooks, guidebooks and tools to give policymakers market insight and bargaining power (Ambassador Philip Tigo) Leverage UN, AU and national AI strategy platforms to shape policy and ensure African voices are heard (Dr. Chinasa Okolo)
Ambassador Tigo proposes practical negotiation toolkits for governments to secure better terms with large AI firms [256-258]. Dr. Chinasa advocates using multilateral fora such as the UN scientific panel, the Africa AI Council and continental strategies to influence policy and secure African interests [210-214]. The disagreement lies in whether the focus should be on tactical negotiation aids or on multilateral policy engagement.
POLICY CONTEXT (KNOWLEDGE BASE)
Strengthening governmental negotiating power is addressed in discussions on government procurement policies and the role of public-sector coordination to secure fair, transparent contracts with AI vendors [S53][S52].
Unexpected Differences
Value of preserving the digitally excluded as a creative resource versus prioritising rapid AI rollout
Speakers: Mark Gaffley, Ambassador Philip Tigo
Mark frames the digitally excluded as a valuable source of creativity that should be preserved and even leveraged in the future [315-317]. Ambassador Tigo pushes for an all-in AI effort, emphasizing cooperation and rapid adoption to avoid competition and to build capacity [201-207].
Mark’s unconventional view that the analogue, digitally excluded population constitutes a strategic creative asset contrasts with Ambassador Tigo’s focus on accelerating AI adoption across the continent, revealing an unexpected tension between preserving non‑digital creativity and pursuing swift AI integration.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between preserving the digitally excluded as a creative resource and rapid AI rollout is highlighted in round-table debates that argue for inclusive approaches to avoid “mental arrest” of marginalized groups while advancing AI initiatives [S63][S60][S61].
Overall Assessment

The panel displayed broad consensus on the importance of AI safety, capacity building and inclusive governance, but diverged on how quickly AI should be deployed, whether AI is necessary for many development challenges, the primary focus of capacity building (public education vs technical model development), and the preferred strategy for strengthening governmental leverage with AI vendors.

Moderate to high – while participants share overarching goals (safe, trusted, inclusive AI), they hold contrasting views on implementation pathways, leading to potential delays or fragmented policies if not reconciled. These disagreements could affect the speed and effectiveness of regional AI collaboration, procurement standards, and the balance between AI adoption and addressing basic infrastructure needs.

Partial Agreements
Both see the need for safeguards, but Ambassador focuses on contractual/administrative levers while Professor Shock emphasizes technical design and operational safeguards.
Speakers: Ambassador Philip Tigo, Professor Jonathan Shock
Both agree AI safety must be embedded in procurement and governance processes. Ambassador Tigo calls for safety benchmarks in procurement contracts and agile mechanisms [241-248], while Professor Shock stresses human-in-the-loop and transparent systems to avoid over-reliance [226-233].
They share the goal of inclusive empowerment, but Dr. Chinasa focuses on structural participation and advocacy, whereas Mark concentrates on education and skill‑building for citizens.
Speakers: Dr. Chinasa Okolo, Mark Gaffley
Both highlight the importance of inclusive participation and capacity building. Dr. Chinasa stresses civil-society coalitions and equitable participation in AI development [92-94][166-172], while Mark stresses public awareness, education programmes and scholarships for marginalized groups [141-151].
Takeaways
Key takeaways
Safe and trusted AI for Africa means avoiding dependency, digital neocolonialism, and erosion of human agency; AI should deliver outcomes that African citizens want and understand.
Short‑term risks such as misinformation, disinformation, and AI‑generated malicious agents are more urgent than speculative long‑term existential threats.
There is a critical data gap: existing AI incident databases do not capture African‑specific harms, often mis‑classifying them as “African American.”
Capacity building, public awareness, and education (MOOCs, scholarships, short courses) are essential to empower marginalized communities and create local expertise.
Collaboration across academia, civil society, government, and the private sector is needed; competition among African actors wastes resources.
Policy and governance are weak: many countries lack AI strategies, talent, and procurement safeguards; embedding safety benchmarks and agile oversight is required.
Deploying AI in critical infrastructure must retain human‑in‑the‑loop controls, transparency, and fallback analogue systems to avoid over‑reliance.
Inclusion of the digitally excluded must be considered to prevent widening the digital divide; AI should be used to accelerate development rather than for its own sake.
Resolutions and action items
Develop and maintain an Africa‑focused AI incident database to track harms and inform policy.
Launch a publicly accessible MOOC on AI ethics, safety, and human rights for African audiences.
Expand scholarship programmes (e.g., Women in Focus) and short courses to build local AI expertise.
Create an African Compute Initiative providing shared high‑performance computing resources to researchers continent‑wide.
Incorporate AI safety benchmarks and guardrails into government procurement contracts and develop agile, continuously updated oversight mechanisms.
Establish independent AI safety institutes or certification bodies within African countries to evaluate models and certify compliance.
Foster coalition building among civil‑society groups to amplify advocacy and policy influence.
Encourage the development of locally‑trained AI models that reflect African languages, cultures, and contexts.
Unresolved issues
Specific mechanisms for enforcing accountability of multinational AI providers when harms occur on the continent.
Detailed guidelines for balancing AI adoption with non‑AI solutions in sectors like education, health, and infrastructure.
How to effectively monitor and mitigate AI model “leaks” and frontier‑model risks originating outside Africa.
Concrete steps for integrating AI into critical infrastructure while ensuring transparency, human‑in‑the‑loop control, and fallback options.
Implementation of mandatory watermarking or provenance labeling for AI‑generated media and its enforceability.
Strategies to bridge the digital divide for the 64 % of Africans lacking internet or reliable electricity.
Long‑term governance frameworks for existential AI risks specific to the African context.
Suggested compromises
Adopt a cooperative, continent‑wide approach rather than competitive national efforts, sharing resources and expertise.
Maintain analogue alternatives alongside AI solutions to ensure services remain accessible to those without digital access.
Use watermarks or provenance tags as a short‑term mitigation for AI‑generated disinformation, recognizing they can be bypassed.
Combine AI deployment with traditional development investments (e.g., infrastructure, education) rather than treating AI as a panacea.
Balance the use of foreign AI models with the development of open‑source, locally‑trained models to retain agency and data sovereignty.
Thought Provoking Comments
If AI systems are creating a dependency rather than building capacity or capability, eroding human agency and extracting African data while concentrating value outside the continent – that is digital neocolonialism and an existential threat.
Frames AI risk in terms of sovereignty and agency rather than technical safety, introducing the powerful concept of digital neocolonialism and linking it to existential risk for Africa.
Shifted the conversation from abstract safety concerns to concrete geopolitical and socio‑economic implications, prompting other panelists to discuss capacity building, local model development, and the need for African‑led governance structures.
Speaker: Ambassador Philip Tigo
We are already seeing a breakdown in trust caused by misinformation and disinformation campaigns, especially around elections, and now single malicious actors can create AI agents to spread targeted, gender‑based political violence at scale.
Highlights an immediate, observable threat—political manipulation via AI—while introducing the novel idea of AI‑driven agents as a new vector for disinformation.
Moved the discussion toward short‑term, real‑world harms, leading other speakers (e.g., Ambassador Tigo and Dr. Chinasa) to emphasize urgent mitigation strategies such as regulation, monitoring, and capacity for rapid response.
Speaker: Professor Jonathan Shock
Current AI incident databases return ‘African American’ when I search for Africa; there is virtually no accessible record of AI harms on the continent, making it hard for governments to craft appropriate regulations.
Exposes a data gap that hampers visibility of African AI risks, calling attention to the need for continent‑specific incident tracking and knowledge sharing.
Prompted calls for better data collection and monitoring mechanisms, influencing subsequent remarks about building African‑focused safety institutes and the importance of localized research.
Speaker: Dr. Chinasa Okolo
We have AI strategies but not AI policies; there is a lack of talent and fluency in the public sector, so AI safety is not even on the radar. We must redefine what ‘existential risk’ means for Africa—not sci‑fi scenarios, but threats to democracy and societal harmony.
Challenges the assumption that existing global AI risk frameworks apply directly to Africa, urging a re‑orientation toward context‑specific risks.
Steered the panel toward discussing concrete policy gaps, capacity building, and the necessity of African‑centric risk definitions, influencing later suggestions about procurement guardrails and collaborative frameworks.
Speaker: Ambassador Philip Tigo
Our public awareness survey showed that nearly 75 % of South Africans know very little about AI, learning mainly through informal channels. We need education, MOOCs, and scholarships to empower citizens to define what they want from AI.
Provides empirical evidence of low AI literacy and proposes concrete capacity‑building solutions, linking public awareness directly to the ability to shape AI governance.
Introduced the theme of education as a prerequisite for meaningful participation, which was echoed by other speakers emphasizing empowerment and agency.
Speaker: Mark Gaffley
What we all want is empowerment – AI must give agency within local contexts. Without language and cultural relevance, models cannot truly empower people.
Reframes the goal of AI from abstract safety to tangible empowerment, emphasizing the necessity of contextualized models.
Deepened the analysis of what ‘trusted AI’ looks like, leading to discussions about building African‑specific models and the importance of local data and expertise.
Speaker: Professor Jonathan Shock
We should view scientists, governments, and citizenry as three interdependent personas; capacity‑building for scientists, regulatory ability for governments, and inclusion of citizens are all essential for safe AI deployment.
Offers a structured framework for stakeholder engagement, highlighting the interconnectedness of capacity, regulation, and inclusion.
Guided the conversation toward collaborative mechanisms and the need for coordinated action across sectors, influencing later remarks on cooperation versus competition.
Speaker: Ambassador Philip Tigo
The African Compute Initiative will provide a shared high‑performance computing platform for researchers across the continent – not a competition with big tech, but a network effect that empowers local AI development.
Introduces a concrete collaborative infrastructure that can democratize access to compute resources, directly addressing earlier concerns about dependence on foreign providers.
Shifted the tone from problem‑focused to solution‑oriented, reinforcing the panel’s call for collective, non‑competitive approaches.
Speaker: Professor Jonathan Shock
Stop competing. AI is not about who builds the biggest data centre; it’s a collective all‑in effort. Competition wastes money and hampers progress.
A succinct, emphatic call to abandon competitive mindsets, reinforcing the earlier theme of cooperation.
Re‑energized the discussion on collaboration, prompting other panelists to cite existing cooperative initiatives (e.g., Masa Kani, GOAI Africa) and to stress the importance of shared resources.
Speaker: Ambassador Philip Tigo
Procurement documents should embed safety benchmarks and agile oversight; without them governments lose bargaining power against trillion‑dollar firms.
Provides a practical policy lever—embedding safety criteria in procurement—to address power asymmetries with large AI vendors.
Directed the conversation toward actionable governance tools, influencing later suggestions about creating negotiation playbooks and continuous, agile regulatory mechanisms.
Speaker: Ambassador Philip Tigo
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the panel from abstract definitions of safe AI to concrete, Africa‑specific challenges and solutions. Ambassador Tigo’s framing of digital neocolonialism and the need to redefine existential risk set a geopolitical lens that other speakers expanded upon with evidence of misinformation, data gaps, and low AI literacy. Dr. Okolo’s observation about missing incident data and Mark Gaffley’s survey on public awareness highlighted the foundational need for knowledge and capacity. Professor Shock’s focus on trust, agency, and the emerging threat of AI‑driven agents deepened the analysis of short‑term harms. Together, these insights redirected the conversation toward practical pathways—education, collaborative compute infrastructure, inclusive stakeholder frameworks, and procurement safeguards—emphasizing cooperation over competition. The cumulative effect was a shift from problem‑identification to a coordinated, actionable agenda for building safe, trusted, and locally relevant AI across Africa.

Follow-up Questions
How can Africa monitor and mitigate AI frontier model leaks given most development is outside the continent?
Need mechanisms to detect, track, and respond to AI model leaks that originate abroad, as current capacity to monitor such leaks is limited.
Speaker: Dr. Chinasa Okolo
What pathways exist for civil society advocacy and holding AI actors accountable in African countries?
The speaker highlighted uncertainty about effective advocacy routes in Africa, indicating a gap in mechanisms for civil society to influence AI policy and accountability.
Speaker: Dr. Chinasa Okolo
How can African scientists gain access to AI models for evaluation and safety testing?
Access to proprietary models is essential for local safety assessments; without it, African researchers cannot conduct meaningful evaluations.
Speaker: Ambassador Philip Tigo
What are effective ways to build African‑owned AI models that reflect local context and reduce dependence on foreign providers?
Developing home‑grown models would address issues of cultural relevance, data sovereignty, and avoid over‑reliance on external tech firms.
Speaker: Ambassador Philip Tigo
How can local languages and cultural context be incorporated into AI models to empower users?
Current models lack African language and cultural nuance, limiting their ability to provide genuine agency and empowerment.
Speaker: Professor Jonathan Shock
What research is needed on AI bias across African social categories such as caste, tribe, religion, and gender intersections?
Understanding bias beyond race—covering tribal, religious, gender, and other intersections—is crucial for fair AI systems in Africa.
Speaker: Dr. Chinasa Okolo
How can an African AI incident database be created and maintained to track harms and incidents?
Existing incident databases do not capture African‑specific AI harms; a dedicated database would support monitoring and policy making.
Speaker: Dr. Chinasa Okolo
What are the short‑term and long‑term AI risks specific to Africa, especially regarding misinformation, disinformation, and AI agents?
Identifying immediate threats (election‑related misinformation) and future existential risks is needed for targeted mitigation strategies.
Speaker: Professor Jonathan Shock
How should AI procurement processes incorporate safety benchmarks and agile oversight mechanisms?
Embedding safety criteria in procurement contracts offers a practical lever to enforce responsible AI adoption.
Speaker: Ambassador Philip Tigo
What negotiation playbooks or guidebooks are needed for African governments to engage effectively with large tech companies?
Governments lack bargaining power and structured guidance; dedicated playbooks would improve negotiation outcomes.
Speaker: Ambassador Philip Tigo
How can AI deployment in critical infrastructure be designed to avoid excluding digitally disconnected populations?
Ensuring analogue alternatives and inclusive design prevents widening the digital divide when AI is integrated into essential services.
Speaker: Mark Gaffley, Ambassador Philip Tigo
How can youth and broader citizenry be meaningfully involved in AI policy formulation?
Inclusive feedback mechanisms and open consultation processes are needed to incorporate the perspectives of younger generations and the public.
Speaker: Dr. Chinasa Okolo
Would mandatory watermarks for AI‑generated media be an effective tool to combat disinformation in Africa?
Explores a policy option to label AI content, though its efficacy against malicious actors remains uncertain.
Speaker: Audience (follow‑up addressed by Professor Jonathan Shock)
How can African governments balance AI adoption with basic development priorities such as electricity, education, and healthcare?
Avoids misallocation of scarce resources to AI solutions when fundamental infrastructure needs may yield greater impact.
Speaker: Dr. Chinasa Okolo
What are best practices for establishing AI safety institutes in Africa, similar to NIST in the United States?
Creating independent, standards‑based bodies would provide systematic testing and certification of AI systems.
Speaker: Dr. Chinasa Okolo
How can the African Compute Initiative be scaled and coordinated across institutions to support AI research?
A shared high‑performance computing platform can amplify research capacity, but requires coordinated governance and resource sharing.
Speaker: Professor Jonathan Shock
What human‑in‑the‑loop designs are most effective for AI systems deployed in critical infrastructure to maintain transparency and agency?
Ensuring that AI decisions remain overseen by humans helps preserve trust and prevents loss of agency in essential services.
Speaker: Professor Jonathan Shock

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Safeguarding Children with Responsible AI


Session at a glance: summary, key points, and speakers overview

Summary

The summit opened with Baroness Joanna Shields warning that governing AI for children is the clearest test of responsible technology stewardship and that existing post-harm models are inadequate for AI’s unique, intimate one-to-one interactions with kids [1-13][16-22]. She emphasized that AI simulates intimacy at scale, affecting children’s safety, mental health and identity formation, and called for age-appropriate default experiences and guardrails to protect dignity and development [14-21].


Moderator Rahul John Aju was introduced as a young AI innovator, highlighting the need to speak with, not just about, children in technology debates [26-31]. Rahul argued that children’s innate curiosity must be guided by critical thinking and that AI safety cannot rely on parents alone, noting the difficulty of distinguishing real from fake information in the AI era [58-63][66-69]. He illustrated the problem with examples of photo uploads and unread terms-and-conditions, and described his “Rescue AI” tool that flags risky contract clauses, underscoring the urgency of AI awareness [85-92]. Rahul advocated for foundational education in natural intelligence before AI, proposing personalized, multimodal learning tools such as Notebook LM and StudyFetch, and cited his free ThinkCraft Academy courses that have reached over 700,000 learners [138-173].


In the panel, Tom Hall defined AI literacy as giving children a “screwdriver” to dissect technology, noting that while 80% of teachers are excited, only 41% feel prepared, thus calling for teacher tools and child-centered curricula [209-218][306-324]. Chris Lehane highlighted AI’s potential to individualize tutoring and expand agency, warning that existing K-12 systems were built for the industrial age and must be re-engineered to empower learners rather than constrain them [232-248]. Urvashi Aneja stressed the importance of embedding AI literacy within policy and pedagogy, and raised concerns that cultural and socioeconomic contexts affect agency, especially in the Global South [220-253].


The Baroness reiterated that a post-harm regulatory paradigm will not work, advocating safety-by-design, age-assurance technologies, and the Open Age Alliance to create interoperable, privacy-preserving age verification across jurisdictions [266-281][377-395]. Maria Bielikova called for systematic, child-involved studies to detect profiling and other harms, arguing that existing data-protection tools can mitigate risks if properly enforced [402-410]. The panel reached consensus that safety by design, inclusion, offline accessibility, and placing children at the centre of governance are essential to avoid a monoculture and preserve curiosity [419-444]. Participants expressed measured optimism that, with coordinated standards and child-led evaluation, AI can enhance learning without eroding agency [421-426][438-440].


Rahul concluded by thanking the summit and urging that policies keep children’s needs at the forefront, reinforcing the collective commitment to responsibly shape the AI-enabled future for kids [456-462][464-466].


Keypoints


Major discussion points


Child-centric AI governance and safety-by-design – The panel repeatedly stressed that AI must be built with age-appropriate guardrails, robust age-verification and privacy protections from the outset, rather than relying on post-harm regulation. The Baroness highlighted the failure of “post-harm” models for AI and called for “safety from the ground up” and “age-assurance technology” that can be verified across platforms [266-276][278-281][389-395].


AI literacy and agency for children and educators – Multiple speakers argued that children need foundational knowledge and critical thinking skills before they can use AI responsibly. Tom Hall described AI literacy as giving children a “screwdriver” to understand the inner workings of AI and noted that many teachers feel unprepared [209-217][214-216]; Rahul emphasized teaching the basics of maths before relying on calculators and advocated a curriculum that teaches “how to think” rather than just “what to think” [106-118][155-158].


Risk mitigation and the potential harms of unchecked AI – Concerns were raised about emotional dependency, manipulation, profiling, loss of curiosity, and cultural homogenisation. The Baroness warned that simulated intimacy can affect mental health and identity formation [13-16][21]; Maria pointed out hidden profiling on platforms like TikTok despite low formal advertising [402-406]; Thomas warned that over-reliance on AI could blunt children’s curiosity and grit [425-437].


Global policy coordination and contextual adaptation – Participants called for a mix of universal standards (e.g., age-assurance, data-privacy) and locally-tailored rules that respect cultural and regulatory differences. Chris outlined a “multi-pronged” package (age assurance, parental controls, no targeted ads, external review) and noted the need to adapt it across jurisdictions [327-357][340-354]; the Baroness highlighted the Open Age Alliance as a mechanism for interoperable age-verification while cautioning against a monoculture of models [389-398].


Inclusion and equitable access – The discussion stressed that AI solutions must be inclusive of diverse languages, abilities, and offline contexts, especially for the Global South. Tom advocated “data privacy, data sovereignty and inclusion” and stressed involving children in design [306-319][322-324]; Urvashi and Thomas added that policies should address children with disabilities and those without reliable internet [440-442][250-253].


Overall purpose / goal of the discussion


The session aimed to shape a responsible, child-focused AI ecosystem by (1) identifying the regulatory and design gaps that could endanger children’s wellbeing, (2) outlining how AI literacy and agency can be cultivated in schools and homes, (3) proposing concrete governance tools (age-assurance, parental controls, global standards) and (4) ensuring that these solutions are inclusive, culturally sensitive, and equitable across different socioeconomic contexts.


Overall tone and its evolution


– The opening remarks by the Baroness set a serious, urgent tone, emphasizing risk and the need for proactive safeguards.


– Rahul’s contribution shifted the mood to energetic and informal, using humor and personal anecdotes while still stressing the importance of guidance.


– The moderated panel adopted a collaborative and analytical tone, balancing optimism about AI’s potential with caution about harms, and offering concrete policy ideas.


– As the conversation progressed, the tone became hopeful and solution-oriented, highlighting emerging tools (age-gate, Open Age Alliance) and the possibility of inclusive, globally coordinated standards.


– The closing remarks returned to a reflective and appreciative tone, thanking participants and reaffirming a collective commitment to responsible AI for children.


Speakers

Baroness Joanna Shields


– Area of expertise: Internet safety, child online protection, policy & regulation


– Role/Title: Baroness; former UK Government minister for Internet safety and harms; senior leader in international child-online-safety coalitions [S9]


Chris Lehane


– Area of expertise: AI policy, public affairs, child-focused AI safety


– Role/Title: Chief Global Affairs Officer, OpenAI [S1]


Tom Hall


– Area of expertise: AI-enabled education, legal frameworks for technology


– Role/Title: Vice President and General Manager, Lego Education (also representing the National Legal Foundation) [S4]


Urvashi Aneja


– Area of expertise: Digital governance, AI ethics, child-centred technology policy


– Role/Title: Director, Digital Futures Lab [S6]


Maria Bielikova


– Area of expertise: Trustworthy AI, user modelling, personalization, disinformation risk


– Role/Title: Director, Kempelen Institute for Intelligent Technologies [transcript]

Thomas Davin


– Area of expertise: Innovation for children, UNICEF programmes, AI for development


– Role/Title: Director of Innovation, UNICEF (Director of the Office of Innovation at UNICEF) [S10]


Moderator


– Area of expertise: Session facilitation, event moderation


– Role/Title: Moderator of the AI Impact Summit panel [transcript]

Rahul John Aju


– Area of expertise: Youth AI entrepreneurship, AI safety tools, AI education for children


– Role/Title: Young AI innovator; Founder, AIRM Technologies; Founder, ThinkCraft Academy; Speaker at the summit [transcript]

Additional speakers:


Alicia – (appears as a panel participant; specific role or expertise not provided) [transcript]

Full session report: comprehensive analysis and detailed insights

Baroness Joanna Shields (UK Minister for Internet Safety) opened the session warning that governing AI for children will be “the clearest test yet on whether we are governing this technology responsibly and for the public good” [1-3]. She argued that AI’s rapid adoption is driven by extraordinary capabilities, but its continued place in society hinges on trust built through responsible design [2-4]. Rejecting the “post-harm regulatory model” used for social media as “not fit for purpose in the AI world”, she noted that AI differs from a platform because it creates one-to-one adaptive interactions that become part of how children learn, communicate, create and form their sense of self [5-8]. Simulated intimacy can feel real to a child even though it is merely code [9-11], and children cannot reliably distinguish authentic human connection from artificial intimacy [12-13]. This blurring threatens safety, mental health, identity formation and long-term well-being, with observed harms such as emotional dependency, manipulation, deep-fake abuse and devastating loss [14-16]. She concluded that children must not be “beta testers” for AI, calling for age-appropriate experiences by default and guard-rails around systems that simulate intimacy [17-21]. She expressed optimism about joining the panel [22-23].


Rahul John Aju (Founder, AIRM Technologies; Founder & Director, ThinkCraft Academy; advisor to public institutions) was introduced by the moderator as “the AI kid of India” and emphasised the need to speak with children rather than about them [26-29]. He recalled his father’s advice to “question everything” and to be critical [45-47][50-54], and described how his parents taught him to separate correct from fake information when using Google [58-60]. He asked whether, in the AI age, children can still perform this discrimination, noting that even parents struggle to identify reliable information [61-64]. To illustrate digital opacity, he pointed to the habit of uploading photos to the cloud without knowing what happens to them [76-78] and the widespread neglect of lengthy terms-and-conditions [85-87].


He then presented Rescue AI, a prototype he and his team have been developing for three years that can ingest any contract or terms-and-conditions document, flag high-risk and low-risk clauses, and advise whether the product should be used [91-94][95-98]. He framed this tool as evidence that AI awareness and safety are essential, especially when no such safeguards exist [95-98].


Building on foundational knowledge, Rahul argued that children should first master natural intelligence-basic maths, reading and critical thinking-before relying on AI tools [106-118]. He likened AI-enhanced learning to a calculator that becomes useful only after the basics are understood [101-108]. He advocated personalised, multimodal resources, citing Notebook LM’s ability to generate videos and podcasts from textbook content [140-144] and StudyFetch’s conversion of chapters into games [144-146]. Through ThinkCraft Academy he has taught over 700,000 learners how to build and fine-tune large language models in 30 days [166-173]; the academy is supported by AIRM Technologies and his advisory work with public bodies [166-173][468-470].


Panel introduction: Thomas Davin (Director, Office of Innovation, UNICEF); Urvashi Aneja (Director, Digital Futures Lab); Maria Bielikova (Director, Kempelen Institute for Intelligent Technologies); Chris Lehane (Chief Global Affairs Officer, OpenAI); Tom Hall (Vice-President, National Legal Foundation) alongside Baroness Shields [198-206].


Tom Hall (Vice-President, National Legal Foundation) defined AI-literacy as giving children a “screwdriver” to dissect technology, stressing that children must understand how computers see the world as data, how bias works and how accountability can be built [212-216]. He noted that while 80% of teachers are excited about AI in classrooms, only 41% feel prepared to teach it, highlighting a capacity gap that requires tools and real-world curricula [209-218]. Hall announced a free AI policy toolkit for classrooms and urged child-centred, inclusive design that respects data privacy and sovereignty [306-311][312-319]. He called for children’s involvement in policy development and warned against one-size-fits-all solutions [322-324]. Importantly, he described “no-regret moves” as design choices that protect inclusion and privacy while allowing iterative improvement [306-311].


Urvashi Aneja (Director, Digital Futures Lab) asked how to embed AI literacy into policy, seeking ways to translate real-world safety practices into the digital AI environment for children [220-223]. She stressed that agency is shaped not only by individual capacity but also by socioeconomic and institutional contexts, especially in the Global South [250-254]. Aneja highlighted the need for child-involved real-world evaluations, redress mechanisms and clear, enforceable principles across jurisdictions [299-304][311-314].


Chris Lehane (Chief Global Affairs Officer, OpenAI) described AI’s potential to provide personalised tutoring that adapts to each child’s pace and learning style [232-236]. He warned that the current K-12 system, designed for the industrial age, limits agency and must be re-engineered to empower learners rather than constrain them [240-246]. Lehane positioned AI as a “leveling technology” that can expand agency, but only if education teaches children how to use it critically [247-249].


Thomas Davin (Director, Office of Innovation, UNICEF) warned of the danger of over-dependence: if AI always supplies the correct answer, children may lose curiosity and grit [425-437]. He suggested that models could deliberately provide occasional wrong answers to foster resilience, and called for systematic impact measurement [425-437]. He also shared a striking statistic: “7 out of 10 children in classrooms cannot explain a text they read at 10 years of age” [471-473]. During the discussion he noted the panel’s deliberate gender arrangement, with boys on one side and girls on the other, as a “beautiful” design choice [474-476].


Maria Bielikova (Director, Kempelen Institute for Intelligent Technologies) shifted focus to hidden commercial profiling, noting that while formal advertising on TikTok is low, children are exposed to influencer-driven content five times more often, a risk that can only be uncovered through child-involved studies [402-406]. She used the metaphor of not prohibiting children from the city but travelling with them to understand the environment, arguing that existing data-protection tools should be enforced [408-410]. She added that “Even though we have the Digital Services Act in Europe, the problem persists” [477-479].


Across the discussion, all speakers endorsed proactive, safety-by-design governance with age-appropriate safeguards, rejecting post-harm models [1-3][266-269][327-354][306-311][312-319][322-324][299-304][442-444][419-426]; they agreed that AI-literacy must begin with critical thinking and foundational skills before AI tools are introduced [41-54][106-118][212-216][155-158]; they stressed inclusion and cultural diversity, warning against a monoculture of AI models [395-398][250-254][306-311][370-374]; they supported a global, interoperable age-verification framework while allowing local adaptation [390-394][327-342][375-381]; and they highlighted the need for teachers to receive practical toolkits and training [209-218][306-311][299-304].


Points of disagreement emerged:


* The Baroness advocated a single, interoperable age-key via the Open Age Alliance [390-394], whereas Lehane noted privacy-law limitations (e.g., in Europe) and cultural norms that require country-specific adaptations [370-374].


* Hall promoted rapid, broad AI integration with toolkits and inclusive curricula [306-311], while Lehane cautioned that over-reliance could erode agency and suggested limiting AI’s role to preserve curiosity [247-249][240-246].


* Bielikova identified covert commercial profiling as the highest technical risk [402-406], whereas Lehane focused on explicit harmful content (violence, sexual, mental-health) and advocated age-gates and parental controls [340-349].


* Davin favoured experimental designs such as intentionally wrong answers to preserve grit [425-437], while Bielikova called for continuous observational research involving children to understand platform effects [403-410].


Key take-aways (consolidated):


1. Post-harm regulation is insufficient; safety must be built into AI design from the outset.


2. Robust age-assurance, parental controls and external reviews are essential safeguards.


3. AI-literacy should start with critical thinking and natural-intelligence foundations, with teachers equipped through toolkits and real-world curricula.


4. Personalised AI tutors can boost agency, but over-dependence risks eroding curiosity; intentional challenge-based design is needed.


5. A global age-verification standard (e.g., Open Age Alliance) is required, yet must be adaptable to cultural and regulatory contexts.


6. Continuous real-world impact research, especially on covert profiling, is needed.


7. Children must be directly involved in testing, redress mechanisms and policy design.


Concrete actions announced:


* UNICEF and partners released a free AI policy toolkit for classrooms [306-311].


* OpenAI committed to a multi-pronged safety package (age gates, default under-18 models, parental controls, advertising bans, external review) [340-349][357-360].


* The Baroness highlighted the Open Age Alliance’s work on interoperable age-keys [389-395].


* Tom Hall pledged to embed child-centred governance, data privacy and inclusion in LEGO’s AI education initiatives [306-324].


* Rahul showcased “Rescue AI” and offered to continue developing tools that help children understand contracts [91-94].


* The panel agreed to pursue further collaboration on teacher training, real-world evaluations and inclusion of Global-South perspectives.


Unresolved issues include: embedding AI-literacy effectively across diverse curricula; balancing a universal safety baseline with locally-tailored cultural rules; ensuring emerging AI companies comply with child-safety standards; designing redress and accountability mechanisms; and preserving cultural and linguistic diversity while delivering age-appropriate content.


Proposed compromises: adopt “no-regret” design principles that prioritise privacy, inclusion and child-respect while allowing iterative improvement; implement robust yet privacy-preserving age-verification adaptable locally; combine AI assistance with intentional gaps or challenges to maintain curiosity; use a hybrid governance model that sets global baseline safeguards (age gates, advertising bans) complemented by region-specific cultural guidelines; and involve children throughout design, testing and policy-making.


Thought-provoking remarks: the Baroness described AI as “engineered simulated intimacy at scale”, reframing the conversation from platform risk to relational-psychological risk [6-9]; Rahul’s question about distinguishing real from fake information highlighted everyday challenges for children [58-64]; his “Rescue AI” demo provided a tangible youth-led safety solution [91-94]; Hall warned against over-dependence and championed “no-regret moves” that respect inclusion and data sovereignty [306-311]; Lehan linked AI to broader socioeconomic structures, arguing that AI must empower agency rather than reinforce existing labour contracts [247-249][240-246]; Bielikova’s city metaphor urged “traveling with children” rather than banning them, underscoring the need for contextual, child-involved research [408-410]; Davin’s “ancestor” comment framed the policy challenge as an ethical legacy [415-418]; and the moderator noted that Under-Secretary-General Amandeep Gill was stranded in traffic during the closing [480-482].


Closing: Rahul thanked the United Nations and the summit organisers, reiterating that policies must keep children at the forefront and that young innovators should be heard as they help design the future [455-462]. The moderator thanked all participants, noted the collective commitment to responsible AI advancement for children, mentioned the traffic delay affecting Under-Secretary-General Amandeep Gill, and formally concluded the session [464-467][480-482].


Actionable recommendations


1. Adopt safety-by-design with robust age-assurance and parental-control mechanisms.


2. Deploy teacher-training toolkits and real-world curricula to build AI-literacy.


3. Conduct child-involved real-world evaluations of AI impacts, especially covert profiling.


4. Implement a global interoperable age-key (e.g., Open Age Alliance) while allowing local cultural adaptation.


5. Ensure continuous monitoring, redress and accountability frameworks that involve children in policy design.


Session transcript: complete transcript of the session
Baroness Joanna Shields

governance. How we manage AI on behalf of children will be the clearest test yet on whether we are governing this technology responsibly and for the public good. AI’s rapid adoption has been driven by extraordinary capabilities, but its continued place in society will depend on trust, and trust is built through responsible design. The post-harm regulatory model that we’ve seen with social media reacting after damage is not fit for purpose in the AI world. AI is fundamentally different. It is not a platform. It is increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. Inadvertently, AI is engineering simulated intimacy and human-like interaction at a scale that is not just a matter of how children learn, but how they learn.

It is hard to imagine. When a model says to a child, I care, I understand, that’s not conscience, that’s code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available. That difference has implications not only for safety, but for mental health, identity formation, and long-term well-being. We have already seen what happens when the line blurs. Emotional dependency, manipulation, deep fake abuse, and in some cases, devastating loss. Children must not be the beta testers for our AI-enabled world. We need age-appropriate experiences by default, with guardrails around systems that simulate intimacy without accountability.

The question is not whether AI will continue to advance. Of course it will. The question is whether we shape it in a way that safeguards the dignity and the development of children. And accountability begins with protection. And I’m excited to join this distinguished panel to have this important conversation, even though it’s day five of the summit. Thank you very much. I’m going to have to move this back up. I’m sorry.

Moderator

Thank you so much, Baroness Joanna Shields, for setting the stakes so clearly. Too often, discussions about children and technology speak about children rather than with them. This session is intentional in doing otherwise. Therefore, I am very pleased to introduce Rahul John Aju, widely recognized as the AI kid of India. He is our featured young AI innovator who has built and deployed real-world AI tools, founded his own AI startup, and advised public institutions on using AI. Rahul, I’d like to invite you on stage.

Rahul John Aju

Thank you. Thank you, guys. Thank you so much for the lovely introduction. I know safety is a bit boring topic, but it’s a very crucial topic. And I think if I stand there, no one is going to see me, so I’m using a hand mic. So hopefully everyone can see me. Yes? Can I get more energy? Hi, guys. Is this all you guys have? Hi. Perfect. So let’s get started. Starting with, you know, when I was young, my father used to tell me… Okay, I’m still young. I’m still young. Younger, younger. That’s what I bet. He still tells me that Raul, question everything. Be critical about everything. The slide changer is not working. Okay, without the slide changer also it will work.

Okay. Be critical about everything. Ask questions. So I did. Why does the chair have four legs? Why is the sky blue? And also, why do birds fly? Why can’t humans fly then? I bombarded him with a lot of questions. So he just took the phone and he’s like, Raul, this is Google. Go search it. And so I did. But you know, while I was using Google, my parents also taught me one thing. How to figure out what is the correct information and the fake information. And that helped me a lot. But this age of AI, how do you expect me to do it? I don’t think even parents can figure out what is the right information and fake information.

We all agree upon that? Yes? So how do we do that? Because curiosity is there in every child. I think I have enough curiosity. But it only becomes powerful if it’s guided the right way. So how do we guide the right way? Because right now we are just teaching kids how to talk to machines. Before we teach them how to… Question. Now I am just saying random quotes now but let’s dive deeper and see why. I will give an example. Everyone remembers the Ghibli trend? Everyone did it? I did it too. Guilty. But it was very fun to be honest. But what happened there was we were all just taking pictures, uploading our pictures to the cloud.

But we don’t even know what’s happening with it. We all agree, right? But right now kids are also doing the same thing, taking their pictures, uploading it to the cloud. But we tell children don’t be on social media, don’t upload your pictures to social media, don’t share your pictures to strangers and all, right? But what about the AI world? We are missing, the parallel is missing. We need to translate real world safety into the digital world. Because right now even most, okay, I have a question. How many of you guys read the 25 page terms and conditions? I don’t, right? You don’t know what’s happening behind the scenes. I don’t know what’s happening behind the scenes.

like most of these pictures were taken and obviously made for the model to be better for all of us, right? Right now a lot of companies are making sure children are safe but we don’t know about it. Are they safe? There are a lot of unknown AI companies as well. What do we do then? That’s right. Also I created an AI software where you can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and it will, thank you and it will literally tell them what to do, if you should use the product or not, right? So be careful. Anyway, so that tool was known as Rescue AI.

I’ve been working on it for the past three years for emergency, for law people, a lot of things. I don’t want to promote myself too much but I’m trying to do that. But what about when things like that are not there? What about if I didn’t do something like that? That is why AI awareness and safety is necessary. Obviously it is. That’s why you’re called here, Raul. But how do you do that education? Right? How do you teach about AI? You know, recently I got calculator in my school and I am so happy because I don’t have to do maths by multiplying, dividing manually. I can do it through calculators in my exam. By the way, I bunked my exam and came today.

Anyway, very happy for that. But you have to do all this calculation. But because I have a calculator, it’s way easier. But I only got access to it once I learned the basics of maths, right? I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should, I don’t know how to sing. Everyone will run away if I start singing. But you should know the basics and the foundations before you start using AI. I feel that’s when you teach about AI. That’s when you say, okay, AI can help you do the essay. AI can help you do the song. You should use the natural intelligence first.

Then start using artificial intelligence. I believe. It’s about using the combination of both, right? Yes. How many of you guys use natural intelligence? Everyone does, right? I’m mostly reliant on artificial intelligence. I’ve got to switch to it. But that’s what matters. But it’s not just about that. It’s also about how we teach, deliver topics. Starting with personalized content. You know, reading for me is kind of boring. I’m so sorry. But everyone learns differently. It might be through reading. It might be through listening. It might be through watching videos, which I prefer the most. That’s how I learn most of the things that are happening. From geopolitics to cricket, which I love. All of these things I’ve learned because I watch the video.

I’m a more visual person. It’s not one size fits all. But sadly, I feel education is. And I believe AI can generate content. Wait. It’s not believe. It’s already happening. You guys know about Notebook LM, right? It can generate videos. It can generate podcasts with one textbook content. That’s how I passed all my exams, to be honest. Even not just that, there is this tool StudyFetch where you can upload a chapter content and it will convert it into games. It’s not just about that. Everyone’s interest is different, right? Take a wild guess. What do you think my interests are? Wild guess. AI. AI, exactly. I am here to talk about AI guys. Cricket on the side but AI, right?

What if you connected E is equal to MC square and thought that through AI? You can do that too in this AI world. How do you do that? See, right now schools teach us what to think. I am repeating that. Schools teach us what to think but I believe schools should teach us how to think. How to think and how to think critical, how to think critically and how to face failures, how to communicate. These are basic things. Trust me, to stand here I had to face a lot of failures. But I learned how to do that because of my father. Trust me. I am giving you some credit. So, thank you. See, now he’s recording the audience, clapping for him.

Okay. So, that is what matters. And here’s one proof of demand, okay. I started something under my company, AIRM Technologies, ThinkCraft Academy. Yes, a bit of promoting, but ThinkCraft Academy, where I taught what is AI to building your own AI, LLM, fine tuning and all that, that in 30 days and more than 7 lakh people learned from that. And that course was completely free. And even there was another course going from what is AI to building your own AI as a startup founder, as a student. And that course was also completely free. But do you know how many people joined and learned from that? Again, 7 lakh people did, combined. that shows that people want to learn about AI.

It just has to be delivered the right way. The name of this course, I know everyone is searching right now. It’s on my YouTube channel. I’m a content creator too. Raul the Rockstar. Yes, you might be thinking, what does he not do, right? I’m joking. But a lot of things goes on. See, I am not saying a lot of big things. I believe we all should be open mind. We should be open to learning more things. We should be curious because AI will not take your job. But someone using AI can. But at the same time, the most important thing in the world of AI is also to be as human as possible. My name is Raul.

Thank you so much. Is it okay if I take a small video? Influencer. Thank you so much. I have to do this too, guys. So it’s very simple. Like I said, I have to do this, totally forced to. I am just going to say, AI Impact Summit, how was the session? And you guys can be, if you guys didn’t like it, just say no, hated it. You guys can say that, be fully honest. I should say you. And also, right, I am totally joking. I am very grateful for this opportunity. You know, last November I was wanting to come here, I was like, register for this, and the fact that they called me to speak here, I am very grateful for this opportunity, and we have to thank them. Thank you. Shall we do it? AI Impact Summit Delhi by UN. Okay, not by, okay, what’s the worry, it’s a part, right? Okay, this is how many times I have to record a normal video. Thank you so much, UN, for calling me, and AI Impact Summit. The audience, how was the session, was it boring? Yes? Was it boring? You guys are agreeing it’s boring? No? Thank you guys, thank you. I will not take too much time.

Moderator

Thank you, Rahul, for that very thoughtful and energizing address. Your perspective underscores a key message for today: the question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly. We will now turn to our panel discussion. The discussion will be guided by two co-moderators with deep expertise at the intersection of innovation, policy, and child well-being. I am pleased to introduce our moderators, Thomas Davin, Director of the Office of Innovation at UNICEF, as well as Urvashi Aneja, Director of the Digital Futures Lab, and I invite them to guide the discussion.

Thomas Davin

Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co-moderators, and I’m delighted to invite four leaders in the industry who are going to have the high bar of keeping you all as entertained and on substance as Rahul just did. So please, a warm welcome to Baroness Joanna Shields. Please, Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies. I took the liberty of not reintroducing the Baroness because I think she was already known to you. Chris Lehane, welcome, Chief Global Affairs Officer for OpenAI. Tom Hall, welcome, Vice President and General Manager of LEGO Education. Over to you, Urvashi.

Urvashi Aneja

Thank you, and thank you to the UNICEF team, and thank you for that very energizing opening. Yeah, I hope we can live up to that level of dynamism. Oh yes, the Baroness wants to know if we can invest in your company. Okay, great. So on that very cheerful note, thank you all for being here, and I’m delighted to be able to moderate this discussion at the India AI Impact Summit. As someone who studies the governance choices that shape how technologies land in society, I’m interested in a very simple test: whether AI expands children’s agency and learning, or quietly narrows it through incentives and design choices. So let’s begin with what we want AI to enable for children, at scale and in practice. Tom, perhaps I can start with you first. LEGO Education has recently pushed into computer science and AI learning in young classrooms. So what does AI literacy that supports well-being look like in real classrooms, and what should we do if we want AI to deepen creativity rather than replace it?

Tom Hall

Well, first of all, thank you for having me, and very tough shoes to fill after Rahul’s spot there. I agree with so much of what he just had to say, and yeah, I’d love him to come and guide some more conversations. Being at this conference, I think we can all see that the rate of technological advancement is breathtaking. Whether we’re deeply involved in it or on the sidelines, there can be a feeling of incredible excitement; there can also be a feeling of, frankly, doom that this change is happening so fast. And I think that we kind of underestimate what the role of children is going to be in this journey. They might look at what’s happening in the world of AI and simply see it as a magic box that they can interrogate at the click of a button, ask simple questions, and get really quite deep answers back.

It might be a funny video they want to produce. It might be the answer to a history exam that they have to submit on Monday morning. And what we think AI literacy is, is ultimately handing children a screwdriver and saying: here is a fairly complex box, but let’s take it apart and let’s understand what’s under the hood, and let’s understand all the components. So for us, AI literacy is allowing and empowering children to really interrogate the fundamental basis of computer science and artificial intelligence. And that’s teaching them how computers see the world as data, what sensing is, how to think about predictability, how to think about bias, and how to force conversations about accountability.

So we want to empower children to have deep thoughts about this. We also want to empower teachers. And I think right now, again, this pace is happening so fast. We asked some primary and middle school teachers in the United States what they thought about the pace of artificial intelligence in classrooms, and a very high number of them are hugely excited about what’s happening. They agree that artificial intelligence literacy needs to be a foundational skill in school; 80% of them see that. But only 41% of them feel remotely ready to go and teach AI literacy in a classroom. So I think we have to provide teachers with the tools that are going to allow them to bring real-world learning to life.

Urvashi Aneja

Thanks, Tom, and I would love to come back to you on the how at a later stage in the panel, because we do a lot of work with policy makers, trying to do capacity support with them, and we really struggle with how you actually embed AI into AI literacy, so I imagine the same is true with children. And I think that’s a really good point: we really have to think about the pedagogy quite carefully to make sure that we are imparting that learning. So I’d love to come back to you on that. Chris, if I can bring you in next. OpenAI has emphasized that AI systems will increasingly support learning, creativity, and problem-solving for young people.

From your perspective, where do you see the most promising opportunities for AI to positively shape children’s experiences, particularly in ways that strengthen agency, curiosity, or access to knowledge? And you’re not allowed to say what Rahul already said.

Chris Lehane

I was just going to say, you got a great explanation of that. First of all, thanks for having me. Awesome panel. Baroness, always good to be with you. My son would be very jealous that I’m sitting next to the LEGO guy; that’s a pretty cool thing. So thank you. And I’ll just also share, I may have to exit a little bit early, because I’m double-scheduled, so if so, my apologies in advance. I’ll try to answer your question at a macro level and then maybe a more specific level that I think picks up on the pedagogy question that you were just asking. First of all, this technology has enormous capabilities to basically individualize teaching.

I mean, you’re at a place where every kid in the world could, in effect, have their own AI tutor that would be able to help them learn at the pace that they learn and in the ways that they learn. I think among the, you know, sort of insights in education is that kids just learn in very different ways, and this technology could be incredibly liberating in terms of answering that. You mentioned the teachers. We do work with the largest teachers union in the United States, 400,000 teachers, to actually train them to develop the AI to, in fact, do some of that individualized teaching. But I think there’s maybe a level down from that, which I think you were sort of picking up on when you were setting up this question.

And that’s the agency question. I know the U.S. public education system better than I know others around the world, so part of what I’ll say is really based on my U.S. experience. But the U.S. K-12 public education system (I see the sign, yes, you’re telling me to shut up) was designed for the industrial age. It was basically designed to take kids who were coming from rural environments and urban environments and teach them to be able to work in factories. That was the bells, the different classrooms that you would go to, the time that the day started, how long the school day lasted. But at its core it was not just literacy in terms of teaching people to read, write, and do arithmetic.

It was actually creating an ethos about how you should work and participate in an industrial age economy. I do think one of the big issues that we’re going to need to think about with this particular technology, which is going to really reward people like Rahul who take agency, is: how do we actually teach people agency? This technology is an incredibly leveling technology; it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to use it that way? Because if so, in the way we think about the social contract relationship between capital and labor and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.

Urvashi Aneja

Thanks, Chris. And I appreciate particularly the point around agency and how we can teach people agency. And I also wonder, sitting here in India, in the global south, one of the things that we can see very clearly is that agency in some sense is not only a factor of individual capacity, but has so much to do with the broader socioeconomic and institutional context in which you find yourself. And so I wonder how we think about agency across different contexts. Back to you.

Thomas Davin

Thank you so much. Let’s get into the next segment, which is really about what happens when it fails, what happens when harm is being done. From a UNICEF lens, of course, when we think of education in the world today, 7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age. 7 out of 10. So clearly the technology’s potential is immense in realizing huge bounds in learning outcomes. But what happens if we go the other way, and we suddenly have an over-dependency on that technology for children, where we maybe frame children’s creativity in ways that actually constrain it or make it one-size-fits-all? So let’s go into that segment of risks and harms: what are the accountability frameworks, and how do we protect against this? For those of you who are following carefully, I would say that the organizers of the panel have done a beautiful job on gender. I don’t know if you noticed, but it’s boys on one side, girls on the other, women asking questions to the men, and the same questions to the women.

They’re by definition much smarter. That’s pretty clear. And that’s exactly where I was going. And the next questions to the women are going to be harder than before, as they should be. So let’s start on a curve. Yes. But to be fair, it continues to get harder and harder as the panel continues. Let’s start with Baroness Joanna. You’ve held UK government roles focused on Internet safety and harms, and you have helped build major child online safety coalitions internationally. From that experience, what is one key lesson from the UK Internet safety agenda that you believe is worthwhile surfacing today? And maybe one area where you would say: we’ve tried this, please don’t do this.

Baroness Joanna Shields

If I could convey one thing, after 15 years of looking at how we regulate technology to prevent harm, it is that this post-harm paradigm we’re operating in is not going to work in the AI future. So we have had to adapt very quickly as governments as harms have emerged using AI. For instance, the deepfake crisis that we’ve experienced recently: I know of six, seven jurisdictions, you know, countries, that have very quickly implemented laws that are specific and targeted to that particular harm. But what we need to do is step back and think about how we build and design safety from the ground up.

And my personal view is that this has to come through consultation with the companies. I see a very different type of reaction from the AI model developers. They’re much more receptive to the idea of safety by design and building in guardrails that protect children from the outset. And I’m actually an optimist at the moment, because I’m starting to see a lot of people doing a lot of the work that we’re doing right now, and companies like OpenAI, which just recently announced that they have an age gate, age assurance technology, to ensure that children under age, whatever the jurisdiction is, I think it’s 18, okay, are not able to engage with the model and to experience, you know, that.

And I think that’s really important because, you know, we’ve been battling this question of age on the Internet for 15 years. And now the technology, whether it’s cryptography or biometrics, all kinds of technologies have emerged to where you can preserve privacy. So there are no excuses anymore for companies not to build in robust age assurance that’s privacy-preserving and that can ensure that the designed experience you get is appropriate for the age you are.

Thomas Davin

Thank you so much. So I love the point about social media. We talk a lot about social media these days, rightfully so. But indeed, it’s been a late awakening worldwide about the potential of that technology, but also about what happens to children in many ways, and we cannot make that same mistake with AI. It’s just so much deeper and broader, and we need to look at this a lot more systematically. Maria, if I can come to you. Your work spans user modeling, personalization, as far as I understand it, and trustworthy AI, and you’ve also spoken publicly about disinformation risks. In your view, where do AI systems create the highest-risk failure modes for children specifically, and what kind of technical evaluation should be required before deployment?

Maria Bielikova

on TikTok for 10 days in Germany, actually. And then we found out what happened. And maybe I can tell it in a second, in my second entry, because it was really shocking for us. Thank you so much.

Thomas Davin

So in essence, really having very clear, impact-focused research continuously, so that it can inform potential inquiry mechanisms and potential redress mechanisms, as a way to safeguard against those potential risks.

Maria Bielikova

And how they are exposed to commercial content. And this is the most critical.

Thomas Davin

Thank you so much.

Maria Bielikova

Even though we have the Digital Services Act in Europe.

Thomas Davin

Thank you. Let’s move to the third segment.

Urvashi Aneja

Thanks. Yeah, and I think that brings us really nicely to this question of what next, what do we do. I think we often agree on what needs to be done at the level of principles: safety, transparency, accountability. I think you’ve added another dimension when you talk about evaluations: that we need to be doing real-world evaluations in real-world deployment contexts of these systems, not just testing them in a lab setting, and doing so regularly. I think the hard part, at least when we talk about principles like safety, transparency, and accountability, is how we operationalize them across jurisdictions and also across business models, which I think also speaks to the point you were making around it being a feature and not a bug.

So this segment is really about the how: what becomes enforceable, what becomes measurable, and what changes incentives. Tom, if I can start with you again. As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to ensure that these tools support children’s well-being at scale, particularly across diverse education systems and cultural contexts?

Tom Hall

Thank you. Clearly this is a really exciting moment, and the potential of this moment in time is enormous, so I think everyone should be ambitious, but at the same time be measured. Go in ambitious with your design plans for bringing AI into classrooms, and see it as an opportunity to maybe make exponential gains in many different markets where you may have been very challenged before. I think there are tremendous opportunities for many markets in the global south right now, so see the introduction of AI and AI literacy as something of a reset. But, you know, don’t jump in blindfolded. This is a once-in-a-lifetime opportunity to establish essential foundational skills for young people, and it’s going to need really careful thought. These governance and design choices have got to be built on no-regret moves, so I would say put data privacy, data sovereignty, inclusion, and respect for the student at the top of any plan. When you teach about, I don’t know, systemic bias in large language models in classrooms, make sure that kids of all types of diversity are represented and can see themselves reflected in the products that they’re experiencing.

Children have a lot to say in this space, so involve them. We’ve published a free AI policy toolkit for classrooms. Have children think about what kinds of things they think need to be considered here; it’s going to be a really meaningful conversation between teacher and student. And talking of teachers, I think give them exciting but also relevant curriculum. We have computer science qualifications in the UK. The entry levels for those are critically low, and very low for girls. We introduced that 10 years ago, we gave very insufficient training for it, and the curriculum is frankly very dry. I think we have to really think about real-world curriculum that is going to excite students, and let them see themselves working on real-world problems in the types of learning experiences that we’re putting out there.

I’m speaking on behalf of the LEGO Group. So, you know, children are our role models. I think when you’re designing AI policies for children, this has to be child-centered and child-led. So just involve them in the plans as you roll them out, and I hope that will lead to some really exciting changes.

Urvashi Aneja

Thanks, Tom. Chris, earlier this year, OpenAI’s policy engagement has included calls for common-sense youth safety approaches and more parental controls. So what, in your view, should be the baseline governance package for child-facing AI, and what should be globally interoperable versus locally set?

Chris Lehane

Sure. Thank you for the question. Let me just give two points, and then I’ll answer that question specifically. First of all, and I think this is a really smart room, so I’m sure we’re all thinking about it this way, but it is really important to understand and recognize that this is not social media, and we should not make the classic mistake of fighting the last war with the next war. There are certainly important lessons to take from it, but understand that this is going to be a technology that is not just on your device, but is going to be around you in all sorts of different ways, physical world and non-physical world.

So understanding that component. Secondly, there are interesting lessons from what we’ve seen on the catastrophic harm side. You’ve seen the emergence of AI safety institutes around the world, where the leading frontier labs, for the most part, work with those safety institutes to basically create safety standards: the UK, the US, Europe, Japan, Australia. You’ve seen an early version of that here in India, and I do wonder whether there’s some version of that you could do specifically for kids’ safety. The third point really goes to your question, which is, yes, we have put forth, and we’re really the only AI company that has done this thus far, though we do hope others will join us, basically a multi-pronged approach.

The first, and the Baroness mentioned this, is we do do age assurance. We try to use signals to identify whether you’re under 18 or not. If we identify you as under 18, or if we are unable to identify your age, we default you to an under-18 model. So even if we’re not sure of your age, we default you to an under-18 model, which has all sorts of restrictions around violence, sexual conversations, and mental health types of issues. Three, we build in a ton of parental controls. Parents can control whether it has memory about your child or not. Parents can get real-time feedback. Parents can control how long your child is spending on it. You can get warnings and alerts if your child is asking about things in the mental health space.

Four, we prohibit any targeted advertising to kids using the technology. I think that’s a clear lesson from the social media age. Fifth, we have an outside review process that we’ve called for. In the U.S., that would be done by, say, a state attorney general, but someone who is part of government, to actually review that what you’re saying you are doing, you in fact are doing. And then finally, we prohibit bots that specifically target kids. There may come a time and place when we actually have really good guardrails around this, and they can serve really helpful, positive, productive purposes. But until we have those guardrails, we think we need to be really, really mindful of that.

So it is a complete package. We are pushing this in California and a number of states. We want to take it around the world. We’re working with some of the leading children’s advocacy organizations. And anyone here who would want to work with us on it, we really welcome that. And we don’t pretend to have all the answers. Like we’re super humble about this. We do think this is what we’ve seen from our data. This makes a lot of sense. It goes farther than what others have done. But we also know that this is going to be a constant learning process, and this is a beginning, not even the middle, and certainly not the end.

Urvashi Aneja

Sorry, just to ask a follow-up question on the bit around how you make this locally relevant. So you have this package, and you’re rolling it out in the U.S. How do you then tailor it to different contexts?

Chris Lehane

You know, it’s a great question. There are some parts of the world, Europe is an example of this, where there are some privacy limitations that actually impact your ability to do age assurance at the level that you would like to do it at. So we’re in the process, in some of these jurisdictions, of trying to work through some of those types of issues. I think there are other dynamics that potentially come into play, which may be what you’re asking about: cultural context, societal context. And I think those are things that you do have to work through with individual countries, because individual countries are going to have their own norms on those.

And I think we’ll also see different levels of vulnerability or different types of vulnerabilities in those different contexts.

Urvashi Aneja

Fair enough. Baroness, if I can bring you in. How should global norms for children’s safety handle cultural and regulatory diversity without creating, in some sense, loopholes that allow companies to opt for the weakest protection?

Baroness Joanna Shields

So I wanted to take that question in two different directions. First of all, in terms of a global regulatory framework, there are certain standards that are required across every jurisdiction. I mean, every country has an age at which children can participate in the digital world. And unfortunately, it’s a blunt instrument in many cases: it applies across the board at a certain age. We’ve been seeing a lot of social media bans recently, and I think that has come out of exasperation on the part of governments; they have given up trying to regulate this technology, and they’ve decided they’re going to just use that blunt instrument as a guide. And unfortunately, there are then benefits that the children can’t participate in.

But the reality is that there’s a little bit of movement here. As age assurance technology grows and becomes much more capable, we can custom-design experiences for young people that accommodate their level of maturity and capability, and ensure that we can meet these requirements in a much more sophisticated and better way. It’s about time we solved for age online once and for all, and I believe we’re getting close to that. There’s an organization called the Open Age Alliance, and it’s a very important organization that’s looking to harmonize standards across all age assurance technology. So whatever age assurance you think is reliable on your platform, Open Age will enable you to generate an age key.

And then that age key travels with the child everywhere they go online. So we’ve got an absolutely verifiable way for companies to deliver an age-appropriate experience. And you asked me about something else that I think is really important in this context: culture. If we have a world where we are accepting models from just the global north, I really believe we will lose so much of our cultural diversity, our uniqueness as people, wherever we come from, whatever our background is. We have to be very mindful of the fact that we don’t want to develop a monoculture based on a handful of models that everybody uses around the world, where we lose that richness of who we are, of what makes us human.

I know that wasn’t really the aim of the question, but I couldn’t let it go without bringing that to bear, because this is an absolutely critical question we need to solve as a society.

Urvashi Aneja

Thank you. I couldn’t agree more on both those points: that we have to solve for age verification, and the risk of flattening culture and what that means for children and for how they develop and grow. Maria, last but not least, you’ve helped elevate trustworthy AI as a public agenda in Slovakia and in Europe through initiatives spotlighting responsible practices. So if a regulator asked you for key measurable indicators that an AI system is acting in a child’s best interest, what would those be?

Maria Bielikova

Actually, I already mentioned it somehow: AI at this moment is so complex, I mean the neural networks that we have, that we cannot actually measure something we don’t know. We can observe it, and this is why it is quite important to do a lot of studies, as we do, and not just take analytics from the companies that provide it, even if they seem the best. Because even though they tell you that children are not profiled, they are, because we see it, and sometimes it’s out of the companies’ control. We should really run such studies. For example, one of the results of a study I mentioned before is that children see less formal advertisement on TikTok.

This is fine, but actually they are exposed five times more to profiling, to the topics with influencers and so on; these are not formal advertisements. So we definitely should do a lot of such studies. And the children should be there, because if we prohibit everything for them until some age, then they will not be able to explore it. It is the same as prohibiting children from going to the city. We should know what is going on, and we should travel with them through this environment. And this is probably the most important thing: doing such studies to really understand what is going on on the platforms where they are, because they will be there.

Urvashi Aneja

I think that’s such a powerful analogy, the city one. And while you were speaking, what struck me is that we have some tools already; we don’t have to approach this afresh. We actually have tools around data protection and privacy, and if we enforce them, some of that profiling you’re talking about need not happen. We have tools that allow us to get data from the platforms to actually understand what is happening on these systems. So again, we have things in our regulatory toolbox that we can exercise. And then, of course, in addition to that there is this point around contextual evaluations that involve children, so that we can understand what these systems are actually doing. Thomas, maybe I can hand it back to you. Or did you want to add something?

Tom Hall

Thomas is my formal name, so I thought you were talking to me.

Urvashi Aneja

Oh, right. If you would like to add something, and then you can hand it on to the other Thomas.

Tom Hall

A lady said something to me this morning that I thought was very wise. She said, you know, you’ve got to think about what kind of ancestor you want to be. And I guess we’re at this really interesting moment: we’ve had social media, we’ve had sugar, we’ve had tobacco. Surely now this is our chance to make some really sharp decisions and pay it forward for the next generation. So credit to the lady who said that to me this morning.

Thomas Davin

Thank you so much, Tom. It’s going to be very hard to close, so maybe I’ll just try to surface the points that I took from the panel and hope they resonate. I come away with a sense of, it’s going to sound terribly UN, but measured optimism. One, because the potential is tremendous; we are all aware of that. The potential, at least from a UNICEF lens, to really change outcomes for children in ways we have never been able to before is huge, and I think that’s something we can all be proud of. And the risks are equally, tremendously important, and will potentially be there for decades if we don’t craft and design this right.

To my mind, there are three directions I heard in which we are going the right way. One: safety by design has to be a must. That’s about age appropriateness, data privacy, child rights at the heart, appropriate content for the right age, and systematic impact measurement. I was struck, Tom, in your session this morning, when you were talking about how, if we have a model that gives children the right answer, or an answer, all the time, they might actually lose their sense of curiosity. I never thought about it like this. What a huge loss it would be for humanity if we suddenly had children who are simply no longer curious, because they can just ask whatever question.

Could we design a model that actually gives the wrong answer on purpose, so that the child actually struggles, because we know that grit is going to be one of the huge skills of tomorrow? Those things are going to be massively important. Redress mechanisms: we don’t talk about this, and how we enforce those redress mechanisms when things go wrong matters too. The second layer in my mind would be inclusion by default, coming back to the Baroness’s point about the risk of a monoculture; we know that some of that is already playing out, and hopefully having a summit in India is one of the turning points where we can see this turn around a little bit, where we really have so many more countries beyond the global north shaping what those solutions are, having representation of regions, of languages, of different dialects, but also of children with disabilities, who are quite often, as we know, left out of those conversations.

And maybe one thing that we haven’t really talked about is having solutions that work for the unconnected, solutions that work offline. We are at risk of focusing only on urban-centered populations, and that would be terrible if we don’t get it right for those who are already struggling by the wayside. And last but not least is children at the heart. Children at the heart because that’s who we want to create that world for, the ancestors we want to be for them, but also because Rahul demonstrated for us that they are the most effective users of this technology, and the ones that have the ability to tell us: this works for me, this doesn’t work for me. And they should not just be a voice but be part of the governance of those mechanisms.

That starts with AI literacy in schools, and it also starts with helping parents gain the ability to guide their children to that literacy. Hopefully, if we get all of these right, we have a chance.

Urvashi Aneja

Thank you all for joining us. I just want to give the floor back to the MC.

Moderator

Thank you so much to the panelists, as well as the moderators and the audience, also on behalf of Under-Secretary-General Amandeep Gill, United Nations Special Envoy for Digital and Emerging Technologies, who regrets missing the session as he is stranded with the Secretary-General's programme. Even the United Nations motorcade cannot make it through Delhi traffic. Could we please welcome Rahul back up to the stage for a very brief reflection on the discussion?

Rahul John Aju

I'll make sure it's brief. First of all, guys, can we have a big clap for them? That was not enough. If you don't realize, these are the main people who are designing the future for us kids. And the fact that I got an opportunity to speak here, thank you again, UN, for that, and thank you, AI Summit, for that. Whatever they said is very true. You know why? Because at this age, specifically for us kids, the policies that are designed when these AI tools are being built should have kids as the first thought, not an afterthought, right? And the fact that that's happening is good, right? Because from Lego to OpenAI to all these big places, to ma'am, everyone here, they are designing the next world.

And I just want to say a big, big, big thank you, and I also want to add one last thing. Thank you so much for always including me in the conversation, but more than that, for listening to us kids, for not just deciding what we need but for keeping our opinions in mind while building this. So a big thank you from all the children out there. Thank you.

Moderator

Excellencies and distinguished guests, thank you for your participation and engagement. We appreciate the insights shared today and look forward to continued discussion on the responsible advancement of AI. The session is now concluded. Thank you, audience. May I request the session officers to please come to the stage, and may I request the audience to exit from the door behind us. Thank you.

Related Resources — Knowledge-base sources related to the discussion topics (43)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Baroness Joanna Shields warned that governing AI for children will be ‘the clearest test yet on whether we are governing this technology responsibly and for the public good’.”

The knowledge-base entry titled “Safeguarding Children with Responsible AI: Baroness Joanna Shields” records her framing of AI governance for children as a key test of responsible and public-good regulation, confirming this statement [S1].

Confirmed (high confidence)

“Simulated intimacy can feel real to a child even though it is merely code.”

S3 explicitly notes that a model’s messages are “code” but “for a child, it can feel very real”, directly supporting the claim.

Confirmed (high confidence)

“Children cannot reliably distinguish authentic human connection from artificial intimacy.”

S3 also states that “children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy,” confirming the claim.

Confirmed (high confidence)

“The blurring threatens safety, mental health, identity formation and long‑term well‑being, with observed harms such as emotional dependency, manipulation, deep‑fake abuse and devastating loss.”

S30 documents deep‑fake abuse as a concrete harm to vulnerable groups, and S35 discusses hidden psychological risks and “AI psychosis”, providing additional context for emotional dependency and mental‑health impacts.

Additional Context (medium confidence)

“Children should first master natural intelligence—basic maths, reading and critical thinking—before relying on AI tools.”

S113 highlights that young people are exposed to sophisticated AI‑generated content and need proper education and verification tools to distinguish real from artificial, reinforcing the argument for foundational learning before AI reliance.

External Sources (119)
S1
Safeguarding Children with Responsible AI — -Chris Lehane- Chief Global Affairs Officer for OpenAI
S2
OpenAI’s push to establish AI as critical infrastructure — In a recent interview, Chris Lehane, the newly appointed vice president of public works at OpenAI, underscores AI’s role …
S3
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co -moderators, and…
S4
Safeguarding Children with Responsible AI — -Tom Hall- Vice President and General Manager at Lego Education (works with the National Legal Foundation)
S5
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co -moderators, and…
S6
Safeguarding Children with Responsible AI — – Baroness Joanna Shields- Urvashi Aneja
S7
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Urvashi Aneja- Ambassador Philip Thigo
S8
Safeguarding Children with Responsible AI — Bielikova offered a memorable analogy: “It is the same as we will prohibit children to go to the city. But we should kno…
S9
Safeguarding Children with Responsible AI — -Baroness Joanna Shields- Former UK government roles focused on Internet safety and harms, helped build major child onli…
S10
High Level Session 4: Securing Child Safety in the Age of the Algorithms — – **Thomas Davin** – Director of Innovation, UNICEF Shivanee Thapa: Thank you. I’ll come back to Minister Bah shortly a…
S11
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Thomas Davin** – Global Director for UNICEF Innovation, Session moderator Karianne Tung, Veronica M. Nduva, Nandan…
S12
Children safety online in 2025: Global leaders demand stronger rules — At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates…
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S15
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S16
Safeguarding Children with Responsible AI — – Rahul John Aju- Chris Lehane
S17
Intelligent Society Governance Based on Experimentalism | IGF 2023 Open Forum #30 — She highlighted the need for AI systems to be inclusive of diverse voices and ensure that they respond to the needs and …
S18
AI governance debated at IGF 2025: Global cooperation meets local needs — At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S19
Child safety online – update on legal regulatory trends combatting child sexual abuse online — Jaap-Henk Hoepman:Yeah, so like I said, so I recently was talking to somebody who was doing research on the false positi…
S20
Data Protection for Next Generation: Putting Children First | IGF 2023 WS #62 — Age verification should not be the default solution for protecting minors’ data. Other non-technical alternatives should…
S21
Cultural diversity — While AI can help preserve cultural diversity, it is crucial to shed light on the problem of cultural homogeneity when d…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — “One where AI is bringing about our humanity and our differences, not homogenizing us.”[10]. “AI needs to respect and ac…
S23
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S24
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Furthermore, Francesca advocates robustly for targeted regulation in the AI field. She firmly asserts that any necessary…
S25
What is it about AI that we need to regulate? — AI systems’ tendency to perpetuate and amplify existing biases was identified as requiring immediate regulatory attentio…
S26
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But this adaptation won’t happen without effort. It requires educators willing to experiment with new approaches even wh…
S27
Responsible AI for Children Safe Playful and Empowering Learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S28
Education meets AI — Additionally, the speakers emphasized the need for personalized learning and adaptive teaching methods. They discussed t…
S29
WS #179 Navigating Online Safety for Children and Youth — 1. Global Standards vs Local Adaptation: Keith Andere highlighted the need to adapt global standards to local contexts a…
S30
Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online — Rather than applying universal Western standards, different regions should be able to establish standards that align wit…
S31
High-Level Session 5: Protecting Children’s Rights in the Digital World — Need for child-appropriate verification, age assurance protocols, and standards for platforms
S32
WS #123 Responsible AI in Security Governance Risks and Innovation — This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. …
S33
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — Tayma argues that educators need to adapt their teaching goals in the AI era. She suggests focusing on developing critic…
S34
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S35
Hidden psychological risks and AI psychosis in human-AI relationships — For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of t…
S36
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Generative AI and large language models have the potential to significantly enhance conversational systems. These system…
S37
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the l ates…
S38
DC-IoT & DC-CRIDE: Age aware IoT – Better IoT — Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a s…
S39
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S40
Global AI Policy Framework: International Cooperation and Historical Perspectives — Bali contends that fundamental concepts like privacy vary significantly across cultures, and that Global South countries…
S41
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S42
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Algorithmic systems indirectly impact children by determining health benefits or loan approvals for their parents By de…
S43
Safeguarding Children with Responsible AI — Artificial intelligence | Monitoring and measurement
S44
WS #172 Regulating AI and Emerging Risks for Children’s Rights — 3. Regulatory Landscape and Challenges 5. Role of Education and Awareness Ansgar Koene: terms of actually making this…
S45
Harnessing AI for Child Protection | IGF 2023 — Monitoring digital content is seen as intrusive and infringing on privacy, while not monitoring absolves platforms of ac…
S46
Governments vs ChatGPT: Investigations around the world — But a more challenging request is for the company to introduce measures for identifying accounts used by children by 30 …
S47
Data Protection for Next Generation: Putting Children First | IGF 2023 WS #62 — Age verification should not be the default solution for protecting minors’ data. Other non-technical alternatives should…
S48
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Moderator – Massimo Marioni:Now you’re all senior leaders within your companies What do you think are the most important…
S49
Open Forum #26 High-level review of AI governance from Inter-governmental P — 4. Youth: Should be involved in policy-making and allowed to innovate while addressing potential risks. Leydon Shantsek…
S50
Policy Network on Artificial Intelligence | IGF 2023 — Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future g…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S52
Debate over AI regulation intensifies amidst innovation and safety concerns — In recent years, debates over AI have intensified, oscillating between catastrophic warnings and optimistic visions. Tech…
S53
Building Indias Digital and Industrial Future with AI — These key comments fundamentally elevated the discussion from surface-level policy rhetoric to deep, nuanced analysis of…
S54
What is it about AI that we need to regulate? — Interestingly, some speakers noted that clear regulatory guidance can actually accelerate innovation. Eltjo Poort inWS #…
S55
Open Forum #17 AI Regulation Insights From Parliaments — Cybersecurity | Violent extremism | Children rights Research shows that children are being recruited for extremism and …
S56
Cultural diversity — While AI can help preserve cultural diversity, it is crucial to shed light on the problem of cultural homogeneity when d…
S57
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Wonderful. I think this is so important to be considered by the policymakers. In fact, this multi-stakeholder …
S58
Responsible AI for Children Safe Playful and Empowering Learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S59
Generative AI in Education — Personal insights, including those from a mother’s perspective, touched on the challenges of steering children towards b…
S60
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — UNESCO is providing policy guidance on AI in education, focusing on frameworks that emphasize ethical use of AI in educa…
S61
TIMELINE — This strategy will integrate artificial intelligence technologies into the field of education through projects aimed at …
S62
JANUARY 14 TH , 2019 — Digital Inclusion and Education for all is an essential component of AI development. More extensive knowledge a…
S63
High-Level Session 5: Protecting Children’s Rights in the Digital World — Need for child-appropriate verification, age assurance protocols, and standards for platforms
S64
Safeguarding Children with Responsible AI — Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
S65
High Level Session 4: Securing Child Safety in the Age of the Algorithms — – Karianne Tung- Christine Grahn- Emily Yu Barrington-Leach advocates for a fundamental shift in platform design where …
S66
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — One of the principles focuses on robustness, security, and safety.
S67
Responsible AI for Children Safe Playful and Empowering Learning — This comment established the philosophical foundation for the entire discussion, shifting focus from AI as a consumption…
S68
Education meets AI — In addition to the above topics, the significance of critical information and critical thinking in education was also di…
S69
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S70
Hidden psychological risks and AI psychosis in human-AI relationships — For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of t…
S71
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Generative AI and large language models have the potential to significantly enhance conversational systems. These system…
S72
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Harari identifies the potential for AI to become children’s primary interaction partner from birth as the most dangerous…
S73
Most US teens use AI companion bots despite risks — A new national survey shows that roughly 72% of American teenagers, aged 13 to 17, have tried AI companion apps such as Re…
S74
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Collaborative Efforts: The Global Age Assurance Standards Summit and the International Age Assurance Working Group are p…
S75
Child safety online – update on legal regulatory trends combatting child sexual abuse online — Jaap-Henk Hoepman:Yeah, so like I said, so I recently was talking to somebody who was doing research on the false positi…
S76
DC-IoT & DC-CRIDE: Age aware IoT – Better IoT — Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a s…
S77
Global Youth Summit: Too Young to Scroll? Age verification and social media regulation — All stakeholders, including government, industry, and civil society representatives, acknowledge that there are no perfe…
S78
WS #270 Understanding digital exclusion in AI era — Speaker 4: Yeah, so I think that this is a very important question. I think first, we need to be very inclusive or in …
S79
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Audience:Thank you. Thank you so much. I represent you from Chinese mission. We appreciate Her Excellency, Ambassador Es…
S80
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S81
Discussion Report: AI Implementation and Global Accessibility — -Development: Using diverse datasets that include perspectives from the global south, both sexes, and people with disabi…
S82
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S83
Protection of Subsea Communication Cables — The discussion maintained a consistently serious and urgent tone throughout, reflecting the critical nature of the infra…
S84
Women, peace and security — The overall tone was one of concern and urgency. Many speakers expressed alarm at negative trends and backsliding on wom…
S85
Opening plenary session and adoption of the agenda — Emphasis is placed on the need to protect critical infrastructure and to increase confidence-building measures among nat…
S86
WS #70 Combating Sexual Deepfakes Safeguarding Teens Globally — The discussion maintained a serious, urgent, and collaborative tone throughout. Speakers demonstrated deep concern about…
S87
How Humans Sense / Davos 2025 — The overall tone was enthusiastic and engaging, with the speaker using humor, personal anecdotes, and even a tattoo demo…
S88
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — The tone is academic and informative, with Kurbalija speaking as an expert educator sharing insights from decades of exp…
S89
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Absolutely. So we are all lucky to be here at this age of AI. We are truly lucky to be in this. No, that was very insigh…
S90
Sauna diplomacy and the quiet power of informality — When transplanted into statecraft, these qualities had radical consequences. At the presidential residence in Tamminiemi…
S91
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — The tone is consistently enthusiastic, patriotic, and inspirational throughout. Sharma maintains an optimistic and confi…
S92
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S93
Scaling AI for Billions_ Building Digital Public Infrastructure — The discussion maintained a balanced but cautionary tone throughout. While panelists acknowledged the tremendous opportu…
S94
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — The conversation maintained a cautiously optimistic tone throughout, characterized by intellectual rigor and practical r…
S95
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S96
Driving Enterprise Impact Through Scalable AI Adoption — The tone was thoughtful and exploratory rather than alarmist, with participants acknowledging both the transformative po…
S97
High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being — The tone of the discussion was largely optimistic and forward-looking. Panelists acknowledged challenges but focused on …
S98
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — The tone of the discussion was largely optimistic and solution-oriented. Speakers highlighted positive examples of how t…
S99
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S100
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S101
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S102
Closing remarks — This comment is powerful because it creates a generational identity and responsibility. The repetition emphasizes urgenc…
S103
Open Mic & Closing Ceremony — The overall tone of the session was appreciative, with a sense of accomplishment expressed by participants. As Mary Udum…
S104
Responsible AI in India Leadership Ethics &amp; Global Impact part1_2 — The tone was professional, collaborative, and pragmatically optimistic throughout. Speakers maintained a solution-orient…
S105
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S106
Protecting children online with emerging technologies | IGF 2023 Open Forum #15 — Xianliang Ren:Ladies and gentlemen, I am pleased to attend the UN Internet Governance Forum in 2023, which is a forum on…
S107
UK security minister raises alarm on potential misuse of AI technology — Tom Tugendhat, the UK’s Minister of State for Security,has warned of the dangers posed by the malicious use of AI techno…
S108
Building a Digital Society, from Vision to Implementation — Stacey Hines introduced the “SEA” framework for people-centric AI adoption: Storytelling for trust-building, Education f…
S109
From principles to practice: Governing advanced AI in action — Human rights | Economic | Development Sasha presents a causal chain showing how prioritizing responsibility in AI devel…
S110
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S111
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena Estavillo Flores emphasized the need for “inclusive governance models with meaningful civil society participation”…
S112
Responsible AI in India Leadership Ethics &amp; Global Impact — She notes that a one‑size‑fits‑all approach does not work; diverse industry templates and varying maturity levels create…
S113
Lightning Talk #139 Including youth to the public discourse — Young people are exposed to sophisticated AI-generated content that appears authentic but is fabricated, including fake …
S114
Next Steps for Digital Worlds — Striking a balance between technology usage and other aspects of life, such as interpersonal communication, exercise, an…
S115
DC3 Community Networks: Digital Sovereignty and Sustainability | IGF 2023 — Luca Belli:5, 4, 3, 2, 1. All right. So welcome to everyone to this annual meeting of the Dynamic Coalition on Community…
S116
Session — Kazakova’s role in the negotiations has been characteristically inquisitive and considered; she endeavours to capture th…
S117
Book presentation: “Youth Atlas (Second edition)” | IGF 2023 Launch / Award Event #61 — Despite the challenges, Pyrate is delighted to be working with her team. She values the opportunities for collaboration …
S118
Internet standards and human rights | IGF 2023 WS #460 — Eva Ignatuschtschenko:Thank you. I’m trying to be quick. I think a bit of optimism. We are talking about dozens of stand…
S119
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Baroness Joanna Shields
5 arguments · 149 words per minute · 1,123 words · 449 seconds
Argument 1
Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start.
EXPLANATION
Baroness Shields argues that the traditional reactive, post‑harm regulatory approach used for social media cannot protect children in the AI era. Instead, governance must be anticipatory, embedding safety and age‑appropriate design into AI systems before they are deployed.
EVIDENCE
She notes that the post-harm paradigm “is not going to work in the AI future” and that “the post-harm regulatory model … is not fit for purpose in the AI world” [3][266-269]. She also stresses that children must not be “beta testers” and that age-appropriate experiences with guardrails are essential [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Baroness Shields’ call for proactive, safety‑by‑design regulation is documented in S1, and S17 reinforces the need for responsible innovation that anticipates risks.
MAJOR DISCUSSION POINT
Proactive governance and age‑appropriate safeguards
AGREED WITH
Chris Lehane, Tom Hall, Urvashi Aneja, Thomas Davin
Argument 2
Harmonized age‑verification standards (e.g., Open Age Alliance) are needed to provide consistent protection across jurisdictions.
EXPLANATION
The Baroness calls for a global, interoperable age‑verification framework so that children’s age can be reliably confirmed online, enabling consistent age‑appropriate experiences worldwide. She cites the Open Age Alliance as a mechanism to generate a portable age key.
EVIDENCE
She explains that “the Open Age Alliance … will enable you to generate an age key… that travels with the child everywhere they go online” [390-394]. She also highlights the need for “robust age assurance that is privacy preserving” and that technology now makes this possible [278-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of robust, privacy‑preserving age verification is highlighted in S1, while S20 discusses the need for careful implementation of age‑verification tools, and S29 examines global standards versus local adaptation.
MAJOR DISCUSSION POINT
Global age‑verification standards
AGREED WITH
Chris Lehane, Urvashi Aneja
Argument 3
Avoid a monoculture of AI models; preserve linguistic and cultural diversity to protect children’s identity development.
EXPLANATION
Baroness Shields warns that relying on a narrow set of AI models from the Global North would erode cultural diversity and limit children’s exposure to varied cultural expressions. She advocates for a pluralistic AI ecosystem that respects local languages and identities.
EVIDENCE
She states that “if we have a world where we are accepting models from just the global north, we will lose so much of our cultural diversity… we don’t want to develop a monoculture” [395-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about cultural homogeneity in AI models are detailed in S21, and S22 stresses that AI should respect and emphasize cultural differences; S17 adds that inclusive voices are essential.
MAJOR DISCUSSION POINT
Preserving cultural diversity in AI
AGREED WITH
Urvashi Aneja, Tom Hall, Chris Lehane
Argument 4
AI is fundamentally different from platforms and requires a distinct regulatory approach.
EXPLANATION
She argues that AI’s one‑to‑one adaptive nature and simulated intimacy set it apart from traditional platforms, so post‑harm models used for social media are inadequate.
EVIDENCE
She says, “The post-harm regulatory model … is not fit for purpose in the AI world” and follows with “AI is fundamentally different. It is not a platform” and describes AI as “a one-to-one adaptive interaction embedded in how children learn” [3-5][6-8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 argues for shared governance tailored to AI’s unique characteristics, and S25 notes that regulation should focus on specific uses rather than treating AI as a generic platform; S17 also points to the need for proactive, AI‑specific safeguards.
MAJOR DISCUSSION POINT
Need for new governance models tailored to AI’s unique characteristics
Argument 5
Guardrails are needed around AI systems that simulate intimacy to protect children.
EXPLANATION
She warns that children should not be used as beta testers for AI that can mimic human connection, calling for age‑appropriate experiences with safeguards against simulated intimacy.
EVIDENCE
She states, “Children must not be the beta testers for our AI-enabled world” and adds, “We need age-appropriate experiences by default with guardrails around systems that simulate intimacy without accountability” [16-18].
MAJOR DISCUSSION POINT
Protective safeguards for AI‑driven simulated intimacy
Chris Lehane
6 arguments · 191 words per minute · 1,316 words · 412 seconds
Argument 1
Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content.
EXPLANATION
Lehane outlines OpenAI’s multi‑layered safety package that first verifies a user’s age, defaults to an under‑18 model when uncertain, and adds parental controls, real‑time alerts, and an external review process to protect children from harmful content.
EVIDENCE
He describes age-assurance signals, defaulting to an under-18 model, parental controls for memory, time limits, and alerts, a ban on targeted advertising, and an outside review by government authorities [327-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
OpenAI’s multi‑layered safety package is described in S1, while S20 provides context on age‑verification and privacy considerations, and S29 discusses aligning such safeguards with global standards.
MAJOR DISCUSSION POINT
Age‑assurance and parental safeguards
AGREED WITH
Baroness Joanna Shields, Urvashi Aneja
Argument 2
AI can level the playing field, but must be used to foster agency and curiosity rather than replace effort; children need space to struggle and develop grit.
EXPLANATION
Lehane emphasizes that AI’s scaling power can democratise learning, yet it should encourage children to think and create rather than provide ready answers, preserving the development of agency and resilience.
EVIDENCE
He notes that AI “is an incredibly… leveling technology” and asks whether we “encourage people to be able to use it that way” or risk undermining the social contract and individuals’ control over their own labour [247-249]. He also stresses the need to avoid replacing effort with AI in order to preserve agency [240-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to preserve curiosity and encourage effortful learning is emphasized in S26, and S27 highlights AI’s role in empowering rather than replacing learner agency.
MAJOR DISCUSSION POINT
Fostering agency and grit with AI
AGREED WITH
Thomas Davin, Baroness Joanna Shields, Tom Hall
Argument 3
Personalized AI tutors can support diverse learning needs, enhancing agency and self‑directed learning.
EXPLANATION
Lehane points out that AI can provide individualized tutoring for every child, adapting to each learner’s pace and style, thereby expanding agency and self‑directed education.
EVIDENCE
He states that “every kid in the world could… have their own AI tutor that would be able to help them to learn at the pace that they learn and in ways that they learn” [232-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of adaptive, personalized tutoring is discussed in S28, which outlines AI‑driven individualized learning experiences.
MAJOR DISCUSSION POINT
Personalized tutoring for agency
Argument 4
Global norms must allow local cultural adaptation while preventing weakest‑link loopholes; standards should be flexible to regional values.
EXPLANATION
Lehane argues that while global safety standards are needed, they must be adaptable to differing privacy laws, cultural expectations, and vulnerability profiles across countries.
EVIDENCE
He references Europe’s privacy limitations affecting age-assurance, and notes that “cultural context, societal context… have to be worked through with individual countries” [370-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between global standards and local adaptation is explored in S29 and S30, and S18 provides a broader view of global cooperation meeting local needs.
MAJOR DISCUSSION POINT
Balancing global standards with local contexts
AGREED WITH
Baroness Joanna Shields, Urvashi Aneja, Tom Hall
Argument 5
OpenAI’s safety package includes age gates, parental controls, real‑time alerts, advertising bans, and external review processes.
EXPLANATION
Lehane reiterates the components of OpenAI’s child‑safety framework, emphasizing that it combines technical safeguards with governance mechanisms to protect minors.
EVIDENCE
He lists the age-gate, parental controls, real-time feedback, limits on memory, prohibition of targeted ads, and an external review by state attorneys general or similar bodies [327-354].
MAJOR DISCUSSION POINT
Comprehensive child‑safety package
Argument 6
Collaboration between AI labs and emerging AI safety institutes is essential for creating effective safety standards.
EXPLANATION
He points out that leading frontier labs are working with newly formed safety institutes worldwide to develop safety standards, indicating that such partnerships are crucial for child‑focused AI safety.
EVIDENCE
He notes, “You’ve seen the emergence of AI safety institutes around the world where the leading frontier labs, for the most part, work with those safety institutes to basically be creating safety standards” [334-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 notes the emerging partnership model between frontier AI labs and safety institutes to develop shared standards.
MAJOR DISCUSSION POINT
Partnerships between AI developers and safety institutes to build standards
Tom Hall
4 arguments · 163 words per minute · 927 words · 340 seconds
Argument 1
Design choices must prioritize data privacy, inclusion, and child‑centered governance, with clear “no‑regret” principles.
EXPLANATION
Hall stresses that AI tools for education should be built on a foundation of data privacy, sovereignty, and inclusive design, ensuring that no‑regret moves protect children’s rights and dignity.
EVIDENCE
He lists “data privacy, data sovereignty and inclusion and respect for the student” as top priorities, and calls for “no-regret moves” in design plans [306-311]. He also mentions publishing a free AI policy toolkit and involving children in design decisions [306-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy‑first design and respect for cultural differences are highlighted in S20 and S22, while S30 discusses the need for regionally appropriate standards.
MAJOR DISCUSSION POINT
Privacy‑first, inclusive, child‑centered design
Argument 2
AI literacy empowers children to understand data, bias, and algorithmic foundations, turning AI into a “screwdriver” for learning.
EXPLANATION
Hall describes AI literacy as giving children a “screwdriver” to dismantle and understand AI systems, teaching them about data, sensing, predictability, bias, and accountability.
EVIDENCE
He says “handing children a screwdriver… let’s take it apart and understand what’s under the hood… teaching them how computers see the world as data, what is sensing, how to think about bias and accountability” [212-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S27 describes AI as a tool that can empower children to explore and understand technology, and S28 supports the pedagogical value of AI literacy.
MAJOR DISCUSSION POINT
AI as a learning tool
AGREED WITH
Chris Lehane, Thomas Davin, Baroness Joanna Shields
Argument 3
Real‑world evaluations and policy toolkits help embed AI literacy sustainably across schools and jurisdictions.
EXPLANATION
Hall argues that practical resources such as a free AI policy toolkit and real‑world curriculum examples are essential for schools to adopt AI literacy responsibly and consistently.
EVIDENCE
He mentions publishing a “free AI policy toolkit for classrooms” and stresses the need for “real world curriculum” that excites students and reflects real-world problems [306-311][316-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of real‑world curriculum examples and toolkits is documented in S26, and S27 mentions the development of practical resources for safe AI integration.
MAJOR DISCUSSION POINT
Toolkits for sustainable AI literacy
AGREED WITH
Urvashi Aneja, Maria Bielikova, Thomas Davin
Argument 4
UNICEF and partners have released a free AI policy toolkit for classrooms to guide safe implementation.
EXPLANATION
Hall notes that UNICEF, together with partners, has made a publicly available toolkit that provides guidance for educators on safely integrating AI into teaching.
EVIDENCE
He explicitly states “We’ve published a free AI policy toolkit for classrooms” [306-311].
MAJOR DISCUSSION POINT
UNICEF AI policy toolkit
AGREED WITH
Thomas Davin, Urvashi Aneja
Maria Bielikova
3 arguments · 129 words per minute · 310 words · 143 seconds
Argument 1
Highest technical risk for children is exposure to commercial content and covert profiling; continuous impact research is required before deployment.
EXPLANATION
Bielikova highlights that the most serious technical threats to children stem from hidden commercial targeting and profiling, which are not evident from standard analytics, and calls for ongoing impact studies.
EVIDENCE
She points out that children see “less formal advertisement on TikTok” but are “exposed five times more… to profiling” through influencers, emphasizing the need for studies to uncover such hidden risks [403-406].
MAJOR DISCUSSION POINT
Commercial content and profiling risks
AGREED WITH
Urvashi Aneja, Thomas Davin, Tom Hall
Argument 2
Existing data‑protection and privacy tools can be enforced to limit profiling and safeguard children online.
EXPLANATION
Bielikova argues that current data‑protection frameworks, such as the Digital Services Act, already provide mechanisms to curb profiling, and these should be actively applied to protect children.
EVIDENCE
She references the Digital Services Act in Europe as an existing tool that can be leveraged, noting “Even though we have the Digital Service Act in Europe” [297].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S20 discusses leveraging existing data‑protection frameworks such as the Digital Services Act to curb profiling, and S25 underscores regulatory focus on mitigating bias and profiling risks.
MAJOR DISCUSSION POINT
Leveraging data‑protection tools
Argument 3
Ongoing studies and real‑world monitoring are essential to understand platform impacts and to adjust safeguards accordingly.
EXPLANATION
Bielikova stresses the importance of continuous empirical research, involving children in studies, to monitor how platforms affect them and to refine protective measures.
EVIDENCE
She calls for “a lot of studies… children should be there… we should travel with them through this environment” to grasp platform effects [408-410].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for continuous impact research and monitoring is emphasized in S26 and S27, which call for empirical studies to inform safeguards.
MAJOR DISCUSSION POINT
Need for continuous research and monitoring
Urvashi Aneja
4 arguments · 169 words per minute · 1080 words · 382 seconds
Argument 1
Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users.
EXPLANATION
Aneja argues that effective child‑safety policies must be co‑designed with children, incorporating real‑world evaluations and clear redress pathways so that safeguards are meaningful for young users.
EVIDENCE
She emphasizes “real-world evaluations… operationalize them… and redress mechanisms” as essential for making principles work in practice [299-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Child participation in safety design is advocated in S1, and S29 highlights the importance of involving end‑users in testing frameworks.
MAJOR DISCUSSION POINT
Child‑involved testing and redress
AGREED WITH
Maria Bielikova, Thomas Davin, Tom Hall
Argument 2
Pedagogy must be adaptable to diverse learning styles and contexts, leveraging visual, auditory, and interactive media.
EXPLANATION
Aneja stresses that education on AI should respect varied learner preferences—visual, auditory, kinesthetic—and be tailored to different cultural and socioeconomic contexts.
EVIDENCE
She notes the need to “think about pedagogy quite carefully” and that “everyone learns differently… through reading, listening, watching videos” [218-224][128-133].
MAJOR DISCUSSION POINT
Adaptive, multimodal pedagogy
Argument 3
Children should have a voice in AI design and policy to maintain agency and ensure solutions serve their interests.
EXPLANATION
Aneja highlights that children must be active participants in shaping AI systems and regulations, ensuring that their agency is respected and that policies reflect their lived realities.
EVIDENCE
She asks “how do we think about agency… across different contexts” and later stresses that “children at the heart… should be part of the governance of those mechanisms” [250-254][442-444].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses child involvement in governance, and S29 reinforces the principle of designing with, not for, children.
MAJOR DISCUSSION POINT
Child participation in governance
Argument 4
Policy frameworks must incorporate Global South perspectives to ensure inclusive, equitable AI governance.
EXPLANATION
Aneja points out that AI governance should not be dominated by Global North viewpoints; incorporating insights from the Global South is essential for equitable outcomes.
EVIDENCE
She remarks on being in India and notes “agency … has so much to do with the broader socioeconomic institutional context” and calls for inclusive perspectives [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for Global South input in AI governance is discussed in S18, and S30 illustrates how regional blocs can set culturally appropriate standards.
MAJOR DISCUSSION POINT
Inclusion of Global South viewpoints
AGREED WITH
Baroness Joanna Shields, Tom Hall, Chris Lehane
Rahul John Aju
4 arguments · 175 words per minute · 1914 words · 656 seconds
Argument 1
Curiosity should be guided; children must first learn critical thinking before being taught how to interact with machines.
EXPLANATION
Rahul reflects on his upbringing, emphasizing that questioning and critical thinking were taught before using search engines, and argues that similar guidance is needed for AI interactions.
EVIDENCE
He recounts his father urging him to “question everything” and his experience using Google to verify information, noting that “parents also taught me how to figure out what is correct information and fake information” [41-60]. He then asks how children can do this in the AI age [61-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 highlights the educational value of guided curiosity and critical thinking before relying on AI, and S27 supports teaching children to interrogate technology.
MAJOR DISCUSSION POINT
Guided curiosity and critical thinking
Argument 2
“Rescue AI” can analyze contracts and terms‑and‑conditions, flagging high‑risk clauses for users, including children.
EXPLANATION
Rahul describes a tool he built that automatically scans legal documents, identifies risky clauses, and advises users on whether to proceed, illustrating a practical safety solution.
EVIDENCE
He explains that the AI software “can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and … what to do” and names the tool “Rescue AI” [91-94].
MAJOR DISCUSSION POINT
AI tool for contract risk analysis
Argument 3
AI awareness and safety education are necessary for children in the AI age.
EXPLANATION
He stresses that, given the complexity of AI, children need dedicated awareness and safety training to navigate AI tools responsibly.
EVIDENCE
He declares, “That is why AI awareness and safety is necessary” after discussing the limits of parental guidance in the AI era [97-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of AI safety education for youth is reinforced in S27, which calls for empowering learners with knowledge about AI risks.
MAJOR DISCUSSION POINT
Need for AI safety and awareness education for children
Argument 4
Foundational learning should precede AI use; children must master natural intelligence before relying on artificial intelligence.
EXPLANATION
He argues that teaching foundational skills and critical thinking first ensures that AI becomes a supportive tool rather than a crutch.
EVIDENCE
He says, “I believe AS should be same. We should learn the basics before using AI. You should use the natural intelligence first, then start using artificial intelligence” [108-114].
MAJOR DISCUSSION POINT
Sequencing education: fundamentals before AI integration
Thomas Davin
3 arguments · 175 words per minute · 1227 words · 419 seconds
Argument 1
Teachers feel unprepared; providing them with tools, training, and a real‑world curriculum is essential for effective AI literacy.
EXPLANATION
Thomas highlights the gap between teachers’ enthusiasm for AI and their lack of readiness, calling for resources and curriculum that bridge this divide.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 stresses the need for teacher resources and real‑world curriculum examples, while S27 mentions toolkits that support educator readiness.
MAJOR DISCUSSION POINT
Teacher preparedness for AI literacy
AGREED WITH
Tom Hall, Urvashi Aneja
Argument 2
Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience.
EXPLANATION
Thomas warns that if AI always provides correct answers, children may lose curiosity, suggesting that deliberately imperfect models could foster grit and deeper learning.
EVIDENCE
He reflects that “if we have a model that actually gives the right answer… they might lose their sense of curiosity” and proposes designing models that “give the wrong answer on purpose so that the child actually struggles” [432-437].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 argues for designing learning experiences that preserve struggle and curiosity, suggesting intentional imperfections in AI outputs.
MAJOR DISCUSSION POINT
Balancing AI assistance with curiosity
AGREED WITH
Chris Lehane, Baroness Joanna Shields, Tom Hall
Argument 3
Systematic impact measurement is required to monitor AI’s effects on learning outcomes and child curiosity.
EXPLANATION
He highlights that without rigorous measurement, AI could diminish curiosity; therefore, ongoing impact assessment is essential to ensure AI supports rather than undermines learning.
EVIDENCE
He remarks that “systematic impact measurement” is needed and warns that “if we have a model that actually gives the right answer … they might lose their sense of curiosity” [425-433].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for systematic impact assessment of AI in education is made in S26.
MAJOR DISCUSSION POINT
Importance of monitoring and measuring AI’s impact on education
AGREED WITH
Urvashi Aneja, Maria Bielikova, Tom Hall
Moderator
2 arguments · 104 words per minute · 338 words · 193 seconds
Argument 1
Discussions about children and technology must involve children directly rather than speaking about them.
EXPLANATION
The moderator highlights that policy conversations often overlook children’s voices, emphasizing the need for child participation in shaping technology governance.
EVIDENCE
He notes that “Too often, discussions about children and technology speak about children rather than with them” and that “This session is intentional in doing otherwise” [27-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Child‑centered policy dialogue is championed in S1, and S29 underscores the importance of involving children in safety discussions.
MAJOR DISCUSSION POINT
Child participation in policy dialogue
Argument 2
Responsibility for guiding children’s AI engagement lies with adults, institutions, and systems.
EXPLANATION
The moderator stresses that the central issue is not whether children will use AI, but whether the surrounding ecosystem is prepared to steer that interaction safely and responsibly.
EVIDENCE
He states, “The question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly” [194-195].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shared governance and adult responsibility for safe AI deployment are highlighted in S23.
MAJOR DISCUSSION POINT
Adult and institutional responsibility for safe AI use by children
Agreements
Agreement Points
Proactive governance and safety‑by‑design with age‑appropriate safeguards are essential, moving away from post‑harm models.
Speakers: Baroness Joanna Shields, Chris Lehane, Tom Hall, Urvashi Aneja, Thomas Davin
Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start. Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content. Design choices must prioritize data privacy, inclusion, and child‑centered governance, with clear ‘no‑regret’ principles. Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users. Over‑reliance on AI risks eroding curiosity and creativity; systematic impact measurement is required.
All speakers stress that AI for children must be governed proactively, embedding safety, privacy and age-verification from the outset rather than reacting after harms occur, and that children should be part of the design and redress process [3-5][16-18][266-269][278-281][327-354][306-311][312-319][320-324][299-304][442-444][419-426][432-437].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with UNICEF child-rights policy urging safety-by-design and proactive governance, reflected in IGF discussions on responsible AI for children and calls for principle-level guidance with practical guardrails [S42][S51][S48].
Preserving child agency, curiosity and grit; AI should augment, not replace, effortful learning.
Speakers: Chris Lehane, Thomas Davin, Baroness Joanna Shields, Tom Hall
AI can level the playing field, but must be used to foster agency and curiosity rather than replace effort; children need space to struggle and develop grit. Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience. That difference has implications not only for safety, but for mental health, identity formation, and long‑term well‑being. AI literacy empowers children to understand data, bias, and algorithmic foundations, turning AI into a “screwdriver” for learning.
Panelists agree that AI must be designed to support agency and curiosity, avoiding a model that always gives the right answer; instead, tools should encourage critical thinking and resilience [247-249][240-246][13-15][16-18][432-437][212-216].
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors UNICEF guidance on preserving child agency and UNESCO recommendations that AI should augment learning, not replace human effort, as highlighted in responsible AI for children and generative AI in education literature [S58][S59][S42].
Inclusion and cultural diversity must be protected; AI should not create a monoculture.
Speakers: Baroness Joanna Shields, Urvashi Aneja, Tom Hall, Chris Lehane
Avoid a monoculture of AI models; preserve linguistic and cultural diversity to protect children’s identity development. Policy frameworks must incorporate Global South perspectives to ensure inclusive, equitable AI governance. Inclusion and respect for the student; ensure diverse representation in AI products. Global norms must allow local cultural adaptation while preventing weakest‑link loopholes; standards should be flexible to regional values.
All agree that AI systems must reflect cultural and linguistic diversity and that global standards should be adaptable to local contexts to avoid a single-culture dominance [395-398][250-254][399-401][306-311][312-319][370-374].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by IGF observations on cultural-diversity risks of AI monoculture and calls for inclusive, multi-stakeholder AI development to protect cultural heritage [S56][S57][S62].
A global, interoperable age‑verification framework is needed to protect children across jurisdictions.
Speakers: Baroness Joanna Shields, Chris Lehane, Urvashi Aneja
Harmonized age‑verification standards (e.g., Open Age Alliance) are needed to provide consistent protection across jurisdictions. Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content. How should global norms for children’s safety handle cultural and regulatory diversity without creating loopholes that allow companies to opt for the weakest protection?
Baroness and Chris outline concrete age-verification and safety mechanisms, while Urvashi raises the need for these norms to be globally consistent yet locally adaptable [390-394][327-342][375-381].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with multiple policy calls for age-verification mechanisms, including EU-wide age-gating proposals and UNICEF/UNESCO emphasis on interoperable verification while noting privacy concerns [S45][S46][S47][S42].
Teachers need tools, training and real‑world curricula to deliver effective AI literacy.
Speakers: Thomas Davin, Tom Hall, Urvashi Aneja
Teachers feel unprepared; providing them with tools, training, and a real‑world curriculum is essential for effective AI literacy. UNICEF and partners have released a free AI policy toolkit for classrooms to guide safe implementation. Real‑world evaluations and policy toolkits help embed AI literacy sustainably across schools and jurisdictions.
Consensus that educator capacity is a bottleneck and that toolkits and practical curricula are required to scale AI literacy responsibly [419-426][306-311][299-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes UNESCO’s AI-in-education framework that stresses teacher training, curricula development, and awareness-raising as essential for effective AI literacy [S44][S60][S61][S50].
Continuous, real‑world monitoring and impact measurement are required to ensure AI safeguards work for children.
Speakers: Urvashi Aneja, Maria Bielikova, Thomas Davin, Tom Hall
Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users. Highest technical risk for children is exposure to commercial content and covert profiling; continuous impact research is required before deployment. Systematic impact measurement is required to monitor AI’s effects on learning outcomes and child curiosity. Real‑world evaluations and policy toolkits help embed AI literacy sustainably across schools and jurisdictions.
All emphasize the need for ongoing empirical research, child-involved testing, and systematic metrics to track AI’s impact and adjust safeguards accordingly [299-304][442-444][403-410][419-426][306-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with IGF recommendations for continuous monitoring and impact measurement of AI systems affecting children, as outlined in responsible AI safeguarding reports [S43][S45][S55].
Similar Viewpoints
Both stress that safety must be built into AI systems before deployment, using age‑verification and proactive design rather than reacting after harms occur [266-269][327-354].
Speakers: Baroness Joanna Shields, Chris Lehane
Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start. Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content.
Both advocate for strong privacy, inclusion and parental oversight mechanisms as core components of child‑focused AI safety [306-311][327-354].
Speakers: Tom Hall, Chris Lehane
Design choices must prioritize data privacy, inclusion, and child‑centered governance, with clear ‘no‑regret’ principles. Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content.
Both underline the necessity of continuous, child‑involved research and monitoring to detect hidden risks such as covert profiling [299-304][403-410].
Speakers: Urvashi Aneja, Maria Bielikova
Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users. Ongoing studies and real‑world monitoring are essential to understand platform impacts and to adjust safeguards accordingly.
Both stress that children need foundational critical‑thinking skills before relying on AI, and that AI should be used to nurture, not replace, curiosity [41-60][61-68][432-437].
Speakers: Rahul John Aju, Thomas Davin
Curiosity should be guided; children must first learn critical thinking before being taught how to interact with machines. Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience.
Unexpected Consensus
A youth innovator (Rahul) proposes a concrete AI safety tool (Rescue AI) aligning with senior panel calls for practical safety solutions.
Speakers: Rahul John Aju, Baroness Joanna Shields, Chris Lehane
“Rescue AI” can analyze contracts and terms‑and‑conditions, flagging high‑risk clauses for users, including children. Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start. Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content.
Rahul’s tool exemplifies the type of safety-by-design solution the Baroness and Chris advocate, showing an unexpected alignment between a young practitioner and senior policymakers [91-94][266-269][327-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects IGF youth-engagement recommendations that encourage young innovators to develop safety tools, with Rahul’s proposal cited as an example of youth-led solutions [S49][S53].
Rahul’s emphasis on guided curiosity mirrors senior experts’ warnings about AI eroding curiosity.
Speakers: Rahul John Aju, Chris Lehane, Thomas Davin
Curiosity should be guided; children must first learn critical thinking before being taught how to interact with machines. AI can level the playing field, but must be used to foster agency and curiosity rather than replace effort; children need space to struggle and develop grit. Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience.
Despite the age gap, Rahul’s call for guided curiosity aligns with Chris and Thomas’s concerns that AI should not diminish children’s natural inquisitiveness [41-60][61-68][247-249][432-437].
POLICY CONTEXT (KNOWLEDGE BASE)
Resonates with UNICEF warnings about AI diminishing curiosity and with responsible AI guidelines that promote guided exploration rather than passive consumption [S42][S58].
Overall Assessment

There is strong consensus that AI for children must be governed proactively with safety‑by‑design, age‑verification, inclusion, teacher capacity building, and continuous monitoring. Panelists across roles and regions agree on the need for child‑centered design, the preservation of cultural diversity, and mechanisms that protect children’s agency and curiosity.

High consensus – the convergence of viewpoints across senior policymakers, industry leaders, educators, and a youth innovator indicates a shared commitment to child‑focused, inclusive, and measurable AI governance, which should accelerate the development of global standards and practical toolkits.

Differences
Different Viewpoints
Global harmonised age‑verification versus locally‑adapted age‑assurance
Speakers: Baroness Joanna Shields, Chris Lehane
Baroness: Calls for a global, interoperable age-key generated by the Open Age Alliance that travels with the child across platforms [390-394]. Chris: Highlights that privacy-law limitations (e.g., in Europe) constrain age-assurance signals and that cultural and societal contexts require country-specific solutions [370-374].
Both agree age verification is essential, but the Baroness pushes for a single worldwide standard, whereas Chris argues that legal and cultural differences mean a one‑size‑fits‑all solution is not feasible, requiring local adaptation.
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the tension noted in policy debates between a harmonised global age-verification system and locally-adapted approaches respecting privacy and data-protection laws [S45][S46][S47].
Extent and style of AI integration in education
Speakers: Tom Hall, Chris Lehane
Tom Hall: Advocates broad AI literacy, publishing a free policy toolkit, and embedding AI as a “screwdriver” for learning with inclusive, child-centred design [306-311]. Chris Lehane: Warns that AI must preserve agency and curiosity, suggesting models might intentionally give wrong answers to foster grit and that AI should not replace effortful learning [247-249][240-246].
Tom pushes for rapid, wide‑scale AI adoption and tooling, while Chris cautions against over‑reliance and proposes limiting AI’s role to protect agency, indicating divergent views on how deeply AI should be embedded in classrooms.
POLICY CONTEXT (KNOWLEDGE BASE)
Relates to ongoing discussions on the appropriate depth of AI integration in curricula, balancing innovation with pedagogical integrity as described in UNESCO and IGF education sessions [S60][S61][S44].
Primary risk focus for children using AI platforms
Speakers: Maria Bielikova, Chris Lehane
Maria Bielikova: Identifies covert commercial profiling and hidden influencer-driven targeting as the highest technical risk for children, calling for continuous impact research [403-406]. Chris Lehane: Emphasises content-related harms (violent, sexual, and mental-health-related material) and proposes age-gates, parental controls, and external reviews to block such content [340-349].
Maria stresses hidden profiling as the most urgent danger, whereas Chris concentrates on explicit harmful content, revealing a mismatch in perceived priority of child‑safety risks.
POLICY CONTEXT (KNOWLEDGE BASE)
Addresses identified primary risks such as exposure to extremist content and inappropriate material, which have been highlighted in IGF research on child recruitment and content monitoring [S55][S45][S42].
Measurement and evaluation of AI’s impact on children
Speakers: Thomas Davin, Maria Bielikova
Thomas Davin: Calls for systematic impact measurement and even designing AI models that deliberately give wrong answers to preserve curiosity and assess outcomes [425-437]. Maria Bielikova: Argues for ongoing empirical studies and real-world monitoring involving children to understand platform effects and adjust safeguards [408-410].
Thomas proposes a more experimental, design‑centric evaluation (including intentional errors), while Maria advocates for continuous observational research, showing different methodological preferences for impact assessment.
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for evidence-based evaluation echo the AI Policy Research Roadmap’s emphasis on measurement frameworks and IGF monitoring guidelines for children’s AI impact [S43][S51][S45].
Unexpected Differences
Different prioritisation of child‑safety risks (profiling vs explicit harmful content)
Speakers: Maria Bielikova, Chris Lehane
Maria: Highlights covert commercial profiling and influencer-driven targeting as the most serious hidden risk [403-406]. Chris: Focuses on preventing exposure to violent, sexual, or mental-health-related content through age-gates and parental controls [340-349].
While both address safety, the unexpected divergence lies in which risk they deem most urgent—hidden data‑driven profiling versus overt harmful content—suggesting differing threat models among experts.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects privacy-centric critiques of profiling versus explicit harmful-content safeguards discussed in data-protection forums and age-verification debates [S47][S45][S55].
Optimism about age‑verification technology versus concern over its bluntness
Speakers: Baroness Joanna Shields, Chris Lehane
Baroness: Notes that age-assurance technology is improving but still often a blunt instrument, calling for better solutions [378-382][278-281]. Chris: Presents a working age-assurance system that defaults to an under-18 model and integrates parental controls, showing confidence in current technical solutions [340-349].
The Baroness expresses caution that existing age‑verification remains coarse, whereas Chris displays confidence that current multi‑layered mechanisms are sufficient—an unexpected contrast between caution and optimism.
POLICY CONTEXT (KNOWLEDGE BASE)
Captures the optimism for age-verification tools contrasted with concerns about their bluntness and potential privacy infringement, as debated in IGF sessions on age-gating and data protection [S47][S45][S46].
Overall Assessment

The panel displayed broad consensus on the need to protect children and to embed AI literacy, but significant disagreements emerged around how to implement age‑verification, the depth of AI integration in education, the primary safety risks to prioritise, and the appropriate methods for impact measurement. These divergences reflect differing priorities (global standards vs local adaptation, rapid deployment vs cautious pedagogy) and risk perceptions (profiling vs content harms).

Moderate to high – while all participants share the overarching goal of child safety and empowerment, the contrasting approaches to governance, risk focus, and evaluation suggest that reaching coordinated policy action will require substantial negotiation and compromise.

Partial Agreements
Both aim to protect children through age‑verification and safeguards, but differ on the mechanism—global interoperable age‑keys versus a layered, company‑specific package.
Speakers: Baroness Joanna Shields, Chris Lehane
Baroness: Supports age-verification and guardrails built into AI from the outset [278-281]. Chris: Implements a multi-layered safety package with age-assurance, parental controls, and external review [340-349].
Both seek widespread AI literacy for children, yet Tom focuses on resources and immediate deployment, whereas Thomas stresses careful measurement and pedagogical design that tempers how readily AI hands children answers.
Speakers: Tom Hall, Thomas Davin
Tom Hall: Promotes AI literacy via toolkits, inclusive curricula, and real-world problem-based learning [306-311]. Thomas Davin: Emphasises AI literacy as essential but stresses systematic impact measurement and designing models that challenge children to preserve curiosity [425-437].
Both agree children must participate in shaping AI safety, but Urvashi stresses procedural testing and redress, while Thomas emphasizes broader governance involvement.
Speakers: Urvashi Aneja, Thomas Davin
Urvashi Aneja: Calls for child-involved real-world evaluations, testing, and clear redress mechanisms [299-304]. Thomas Davin: Highlights the need for children to be at the heart of governance, giving them a voice in design and policy [442-444].
Takeaways
Key takeaways
- Post‑harm regulatory approaches are insufficient for AI; safety must be built into design from the outset, especially for children.
- Age‑assurance, robust parental controls, and external independent reviews are essential safeguards for child‑facing AI systems.
- AI literacy should begin with teaching critical thinking and foundational knowledge before introducing AI tools; teachers need training and practical curricula.
- Personalized AI tutors can enhance agency and learning, but over‑reliance may erode curiosity and grit; intentional challenge‑based design may be needed.
- Global harmonisation of age‑verification (e.g., the Open Age Alliance) is required, while allowing cultural and regulatory adaptation to avoid a monoculture of models.
- Continuous real‑world impact research and monitoring of profiling, commercial content, and covert influences are needed to evaluate safety before deployment.
- Children must be involved directly in testing, redress mechanisms, and policy design to ensure solutions serve their interests.
Resolutions and action items
- UNICEF and partners released a free AI policy toolkit for classrooms to guide safe implementation.
- OpenAI committed to a multi‑pronged safety package: age gates, default under‑18 models, parental controls, advertising bans, and external review processes.
- Baroness Shields highlighted the Open Age Alliance initiative to create interoperable, privacy‑preserving age‑verification keys.
- Tom Hall (LEGO) pledged to incorporate child‑centered governance, data privacy, and inclusion, and to involve children in policy development.
- Rahul John Aju showcased “Rescue AI” for contract risk analysis and offered to continue developing tools that help children understand terms and conditions.
- Panelists agreed to pursue further collaboration on real‑world evaluations, teacher training resources, and inclusion of Global South perspectives.
Unresolved issues
- How to embed AI literacy effectively into diverse curricula and teaching practices across different education systems.
- Specific mechanisms for localising global safety standards without creating weakest‑link loopholes.
- Methods for ensuring unknown or emerging AI companies comply with child‑safety requirements.
- Detailed processes for redress and accountability when AI harms children in practice.
- Balancing the need for age‑appropriate content with preserving cultural and linguistic diversity in AI models.
Suggested compromises
- Adopt “no‑regret” design principles that prioritize data privacy, inclusion, and child‑respect while allowing iterative improvements.
- Implement age‑verification that is robust yet privacy‑preserving, enabling a universal age key that can be adapted locally.
- Combine AI assistance with intentional gaps or challenges to preserve curiosity and develop resilience.
- Use a hybrid governance model: global baseline safeguards (age gates, advertising bans) complemented by locally‑tailored cultural guidelines.
- Involve children in the design and testing phases to balance safety controls with user agency.
Thought Provoking Comments
AI is fundamentally different. It is not a platform. It is increasingly a one‑to‑one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. AI is engineering simulated intimacy at scale, and children cannot reliably distinguish between authentic human connection and artificial intimacy.
She reframes the conversation from treating AI like other digital platforms to recognizing its unique relational dynamics with children, highlighting deep‑rooted psychological risks rather than just data privacy.
Sets the thematic foundation for the whole panel, prompting subsequent speakers to discuss safety‑by‑design, age‑appropriate experiences, and the need for new regulatory approaches rather than post‑harm models.
Speaker: Baroness Joanna Shields
While I was using Google, my parents taught me how to figure out what is correct information and what is fake. In this age of AI, how do we expect kids to do it? Parents can’t even figure it out. Curiosity is there in every child, but it only becomes powerful if it’s guided the right way.
Raises the practical challenge of misinformation and critical thinking for children in the AI era, moving the discussion from abstract policy to everyday lived experience.
Triggered dialogue on AI literacy, the role of educators and parents, and led Rahul to introduce his own tool (Rescue AI) as a concrete response to the problem.
Speaker: Rahul John Aju
I created an AI tool called Rescue AI: you can upload a full terms‑and‑conditions document and it highlights high‑risk and low‑risk clauses, telling you whether you should use the product.
Provides a tangible, youth‑driven solution to the transparency problem, demonstrating that children can be innovators in safety, not just passive users.
Illustrated the potential of child‑led tool development, encouraging other panelists to consider how to empower young people to create safeguards, and reinforced the call for AI literacy that includes hands‑on building.
Speaker: Rahul John Aju
We must be careful not to create over‑dependency on AI that narrows creativity. Governance and design choices need to prioritize data privacy, data sovereignty, inclusion, and respect for the student. Involve children in the design process and avoid one‑size‑fits‑all solutions.
Highlights the risk of homogenized AI experiences and stresses inclusive, child‑centered governance, expanding the conversation beyond safety to broader ethical design principles.
Shifted the tone toward concrete design criteria, prompting further discussion on inclusion, cultural diversity, and the need for “no‑regret” moves in policy.
Speaker: Tom Hall
AI is a leveling technology that scales the ability of anyone to think, learn, create, and own their labor. We need to teach people agency so they can use it to control their own work, not just be exploited by the existing social contract between capital and labor.
Connects AI for children to larger socioeconomic structures, introducing the concept of agency as a central metric for future policy, moving the debate from child safety to societal transformation.
Broadened the panel’s scope, leading participants to consider long‑term implications of AI on labor markets and the importance of teaching agency alongside technical skills.
Speaker: Chris Lehane
Age‑assurance technology can create a verifiable age‑key that travels with the child across every platform. The Open Age Alliance is working to harmonise standards so that age‑appropriate experiences are delivered globally.
Offers a concrete, globally‑scalable regulatory mechanism that addresses the blunt‑instrument age bans currently used, linking technical innovation with policy harmonisation.
Prompted dialogue on global standards versus local adaptation, and set the stage for later discussion on cultural diversity and avoiding a monoculture of AI models.
Speaker: Baroness Joanna Shields
We shouldn’t prohibit children from the city; we should travel with them through the environment, study what’s happening, and use those insights to protect them while allowing exploration.
Uses a vivid metaphor to argue for contextual, child‑centered research rather than blanket bans, emphasizing the need for empirical studies and child participation.
Reinforced the call for real‑world evaluations and child involvement, influencing the moderators’ summary that highlighted the importance of “traveling with children” in AI governance.
Speaker: Maria Bielikova
Think about what kind of ancestor you want to be. We have the chance now, after social media and other harmful technologies, to make sharp decisions that will pay forward for the next generation.
Frames the policy challenge as an ethical legacy, giving the discussion a moral urgency that resonates beyond technical solutions.
Served as a concluding moral anchor, influencing the final synthesis that emphasized responsibility, inclusion, and long‑term societal impact.
Speaker: Tom Hall
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that reframed AI from a mere platform to a relational, agency‑shaping technology. Baroness Shields’ opening about simulated intimacy set a high‑stakes context, which Rahul’s personal experience and his Rescue AI tool grounded in everyday challenges. Tom Hall’s warnings about over‑dependence and inclusion, Chris Lehane’s linkage of AI to broader labor dynamics, and Maria Bielikova’s city metaphor together expanded the conversation from child‑centric safety to systemic design, cultural diversity, and empirical oversight. These comments sparked new sub‑topics—age‑verification standards, global‑local regulatory balance, and the moral imperative of shaping a responsible legacy—thereby deepening the analysis and steering the panel toward concrete, actionable pathways for child‑focused AI governance.

Follow-up Questions
How can children be guided to distinguish between authentic human connection and artificial intimacy, and how can we teach them to critically evaluate AI‑generated information?
Ensures children are not misled by persuasive AI interactions, protecting mental health and fostering critical thinking.
Speaker: Rahul John Aju
What mechanisms can help children understand and evaluate lengthy terms and conditions and privacy policies of AI services?
Improves informed consent and transparency, preventing hidden data collection or harmful clauses.
Speaker: Rahul John Aju
How should we handle unknown or unregulated AI companies that claim to be safe for children?
Addresses gaps in oversight and the need for standards to protect children from potentially unsafe AI products.
Speaker: Rahul John Aju
How can AI education be structured so that children first learn natural‑intelligence fundamentals before using AI tools?
Establishes a solid knowledge base, ensuring AI is used as a supplement rather than a replacement for basic skills.
Speaker: Rahul John Aju
What are the most effective ways to translate real‑world safety practices into the digital AI environment for children?
Bridges the gap between offline safety habits and online AI interactions, reducing risk of harm.
Speaker: Rahul John Aju
What are the highest‑risk failure modes of AI systems for children, and what technical evaluations should be required before deployment?
Identifies specific safety threats and establishes pre‑deployment testing standards to protect children.
Speaker: Thomas Davin (to Maria Bielikova)
What key lesson from the UK Internet safety agenda is most relevant today, and what practice should be avoided in AI governance?
Leverages past policy experience to inform current AI regulation and avoid repeating ineffective approaches.
Speaker: Thomas Davin (to Baroness Joanna Shields)
What governance and design choices are essential to ensure AI tools support children’s well‑being across diverse education systems and cultural contexts?
Ensures AI implementations are inclusive, culturally sensitive, and aligned with varied educational needs.
Speaker: Thomas Davin (to Tom Hall)
What baseline governance package should be globally interoperable for child‑facing AI, and what elements need local adaptation?
Creates a common safety framework while allowing flexibility for jurisdiction‑specific cultural and regulatory factors.
Speaker: Urvashi Aneja (to Chris Lehane)
How should global norms for children’s safety handle cultural and regulatory diversity without creating loopholes that allow the weakest protection?
Prevents regulatory arbitrage and ensures minimum safety standards worldwide.
Speaker: Urvashi Aneja (to Chris Lehane)
What measurable indicators can regulators use to assess whether an AI system is acting in a child’s best interest?
Provides concrete metrics for accountability and ongoing monitoring of AI impacts on children.
Speaker: Urvashi Aneja (to Maria Bielikova)
How can we conduct real‑world evaluations of AI systems in children’s contexts rather than only lab testing?
Ensures that safety and effectiveness assessments reflect actual usage environments and diverse user experiences.
Speaker: Urvashi Aneja
How can we ensure inclusion of children from the Global South, children with disabilities, and those without internet access in AI solutions?
Addresses equity concerns, preventing a digital divide that could exacerbate existing inequalities.
Speaker: Thomas Davin (summary)
Can AI models be intentionally designed to give occasional wrong answers to foster grit and curiosity in children?
Explores pedagogical design that balances assistance with challenge to support long‑term learning skills.
Speaker: Thomas Davin (summary)
What redress mechanisms are needed when AI harms children, and how can they be effectively enforced?
Establishes pathways for remediation and accountability when safety safeguards fail.
Speaker: Thomas Davin (summary)
How can profiling of children on platforms (e.g., via influencer content rather than formal ads) be measured and mitigated?
Targets subtle forms of data exploitation that affect children’s privacy and autonomy.
Speaker: Maria Bielikova
How can age‑assurance technology be standardized globally (e.g., via the Open Age Alliance) and integrated across platforms to protect children?
Seeks a universal, privacy‑preserving solution for age‑appropriate experiences, reducing reliance on blunt age bans.
Speaker: Baroness Joanna Shields

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.