Agents of Change: AI for Government Services & Climate Resilience

20 Feb 2026 11:00h - 12:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The session focused on the emerging role of AI agents in public governance, introduced by Minister Sridhar Babu and explored by a panel of experts. The Minister framed the current moment as an inflection point where the shift from generative AI that merely answers to agentic AI that can act is redefining policymaking [22-24]. He argued that intelligence should be treated as public infrastructure and described Telangana’s vision of AI as a co-governor that can forecast floods on the Musi river and pre-allocate resources before crises hit [45-48]. Pilot projects cited include AI advisors co-trained with farmers in local dialects, a Telugu-language land-record AI that compresses response times, and satellite-driven heat analysis that will guide Hyderabad’s urban cooling strategy by 2035 [53-62]. The state has also launched a sovereign AI nerve centre (ICOM) and an open data exchange platform with over 1,000 datasets, which he says already powers anticipatory health care and climate-resilient services [72-78][80-86]. Panelists defined an AI agent as a role-aware system with memory that can act across digital channels, insisting that guardrails and a trust layer are essential to curb hallucinations and bias [133-141][142-144][145-147]. They agreed that the biggest change is the move from narrow, task-specific bots to end-to-end, systems-level agents capable of autonomously executing business or government processes [110-113][119-122]. However, high-stakes applications such as drafting multi-million-dollar RFPs demand strong safeguards and will likely retain a human-in-the-loop for final validation [148-154]. Concrete use cases highlighted were AI-driven floodplain analysis for water infrastructure and police-assistant agents deployed in the UK and Tasmania, demonstrating immediate value while larger ambitions mature [170-174][185-188].
Speakers stressed the importance of data sovereignty (strategic control now, technical supply-chain control later), advocating a two-track approach so governments can benefit today while building longer-term capabilities [246-250][251-254]. Upskilling public officials and providing transparent “nutrition-label” disclosures were identified as critical guardrails, acknowledging that probabilistic systems can never be perfectly deterministic [253][301-304][218-222]. Success metrics proposed included vernacular AI tools that empower farmers, faster delivery of physical infrastructure, and measurable income growth for the bottom 50% of earners [334-335][345-347][355-356]. The discussion concluded that AI agents can become a force multiplier for governance if standards, evaluation frameworks, and agile regulation evolve in step with rapid technological change [260-268][317-324].


Keypoints


Major discussion points


Shift from traditional AI to agentic AI – The Minister highlighted moving “from generative AI that simply answers… to agentic AI that acts now” [22-24]. Panelists echoed this transition, describing it as a move to “end-to-end AI-led execution of business processes or government processes” [110-113] and noting that “the biggest change… is the emergence of agentic AI” [116-117]. Mike added that the evolution is “from task specific to systems level” [120-122], while Srini observed a shift “from co-pilot human in the loop to agents which can act and really provide value” [124-125].


Concrete Telangana initiatives using AI agents – The Minister gave multiple examples: AI-driven flood prediction for the Musi river [45-48], a Telugu-first AI that “records land records, interprets satellite indicators and compresses the time between the climate event and an incident settlement” [57-59], satellite-driven heat analysis shaping urban cooling strategies [60-62], solar-powered edge compute nodes keeping services alive during grid failures [63-64], and the creation of a “sovereign AI nerve centre” and a state-wide data exchange platform that powers health-risk anticipation and climate-resilient planning [73-84][85-88].


Guardrails, trust, and human oversight – Srini stressed that an agent must have a “trust layer” with guardrails to prevent hallucinations, bias, and toxicity [144-148]. Lee warned of “risks of over-reliance” and emphasized careful use-case selection, sandboxes, and clear liability [190-203]. Mike highlighted the need for transparency (e.g., “nutrition-label” style cards) and human-in-the-loop control to build trust [216-231][301-306]. Saibal pointed out that up-skilling public-sector staff is essential because “the person… is not an AI engineer” [253-254].


Strategic and technical sovereignty over data and AI – The Minister described a “sovereign AI nerve centre” and an open data pipeline that keeps “all the data… on this platform” [73-77]. Srini differentiated “strategic sovereignty” (control over data and policies) from “technical sovereignty” (control over the full supply chain) and urged governments to pursue both tracks [246-251].


Future success metrics and vision – In closing, panelists offered concrete measures of progress: a farmer being able to get vernacular advice from a small language model [334-335]; an active AI safety evaluation ecosystem shared globally [339-343]; infrastructure built faster and safely with public confidence [345-351]; and measurable uplift in income for the bottom 50% of the population [355-357].


Overall purpose / goal


The discussion aimed to showcase how AI agents can become “force multipliers” for public governance, illustrated by Telangana’s pioneering projects, while jointly exploring the policy, technical, and ethical frameworks needed to deploy them responsibly. Participants sought to define practical use cases, outline necessary guardrails, and envision measurable outcomes for a “better tomorrow” powered by trustworthy, sovereign AI.


Overall tone


The conversation began with an optimistic and visionary tone, celebrating Telangana’s breakthroughs and the promise of agentic AI. As the panel moved into technical details, the tone became analytical and cautionary, focusing on risks, guardrails, and the need for human oversight. In the final segment, the tone shifted to forward-looking and hopeful, emphasizing concrete success metrics and collaborative pathways for governments and industry. Throughout, the dialogue remained constructive and collaborative.


Speakers

Victoria Espinel – Panel moderator and discussion facilitator; representative of Salesforce (thanked Salesforce team)[S12]


Minister Sridhar Babu – Minister (Telangana), policymaker and government official discussing AI governance[S5]


Srinivas Tallapragada – Engineering leader for a major AI platform (referred to as “Srini”), focuses on AI agents and trust layers[S8]


Saibal Chakraborty – Panelist, AI policy and public-sector expert[S9]


Lee Tiedrich – Professor, AI safety researcher; contributed to International AI Safety Report[S2]


Mike Haley – Senior Director of AI at Autodesk; discusses AI applications in infrastructure and guardrails[S1]


Additional speakers:


– None


Full session report: comprehensive analysis and detailed insights

The session opened with Victoria Espinel welcoming Minister Sridhar Babu, describing him as a “very special guest” and inviting him to the podium [1-6]. The Minister began by greeting the audience, highlighting Delhi as the capital of India and noting the presence of distinguished panelists and industry leaders [7-12]. He framed the discussion around “AI agents for a Better Tomorrow” and positioned the present moment as a fundamental inflection point in governance [16-17].


A central theme introduced by the Minister was the transition from generative AI, which merely answers questions, to “agentic AI that acts now” [22-24]. He argued that the traditional search bar is dying, to be replaced by more profound, action-oriented systems [33-34]. This shift, he suggested, marks the third era, defined by the intelligence of the system, which should be treated as public infrastructure rather than a product [36-41]. He illustrated this with three “lives” of AI in the country: research, policy, and finally, real-world impact that addresses dust, drought, monsoons and markets [43-44].


He reiterated the conference theme “AI for everyone, AI for human welfare” [85-86] and, after outlining the vision, thanked the Salesforce team, the event organizers, and the audience for the opportunity to present Telangana’s work [90-95]. He also framed the future of governance as being forged in the “living laboratories of the Global South”, citing Hyderabad as a prime example [95-98].


The Minister then detailed several Telangana pilots that embody this vision. An AI co-governor is envisioned that can predict floods on the Musi river and allocate resources before a crisis materialises [45-48]. In agriculture, AI advisors are being trained together with farmers, incorporating local dialects, soil wisdom and lived patterns into the model [52-54]. A Telugu-first AI system now records land records, interprets satellite indicators and dramatically shortens the response time between climate events and incident settlement [57-59]. Satellite-driven heat analysis is already shaping zoning, green-belt creation and urban cooling strategies for Hyderabad, with a target implementation by 2035 [60-62]. Solar-powered edge-compute nodes keep government services operational during grid failures, a first for any Indian state [63-64].


Telangana has launched what the Minister called the country’s first sovereign AI nerve centre, ICOM, intended as an AI innovation hub that supports R&D, talent development and deep integration of intelligence into governance [72-73]. Complementing this is a state-wide data-exchange platform that hosts over 1,084 datasets, converting administrative exhaust into ecological signals and enabling anticipatory health care and climate-resilient services [73-84]. The real breakthrough, he stressed, lies not in isolated projects but in the architecture that binds them together, such as the upcoming AI City and the net-zero Bharat Future City, which are envisioned as self-learning, sustainable territories that generate their own compute resources and serve as policy-advisory platforms [68-72].


Panelists then converged on a definition of an AI agent. Saibal Chakraborty described the shift as moving from solving discrete problems to end-to-end AI-led execution of business or government processes [110-113]. Lee Tiedrich added that agentic AI can act on behalf of people, extending beyond mere answer generation [115-117]. Mike Haley highlighted the evolution from task-specific bots to systems-level agents capable of chained reasoning and multi-agent orchestration [119-122]. Srinivas Tallapragada noted the transition from a “co-pilot human in the loop” to agents that can independently provide business value [124-125]. He then enumerated the essential components of an agent: a defined role, knowledge, short- and long-term memory, actuation capability across digital channels, and a “trust layer” of guardrails to prevent hallucinations, bias and toxicity [133-147].
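This anatomy of an agent can be pictured as a small data structure. The sketch below is purely illustrative: the class, field, and method names are assumptions for exposition and do not correspond to any vendor’s actual agent API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal sketch of the agent anatomy described on the panel.

    All names here are illustrative assumptions, not any real vendor's API:
    a role, knowledge, short- and long-term memory, guardrails, and the
    ability to act across digital channels through a trust layer.
    """
    role: str                                            # the job to be done
    knowledge: list[str]                                 # facts it can draw on
    short_term_memory: list[str] = field(default_factory=list)
    long_term_memory: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)  # what it must NOT do

    def trusted(self, action: str) -> bool:
        """Trust layer: block any action that matches a guardrail."""
        return not any(banned in action for banned in self.guardrails)

    def act(self, action: str, channel: str = "web") -> str:
        """Actuation on whatever surface the user is on (WhatsApp, web, SMS)."""
        if not self.trusted(action):
            return f"[blocked by trust layer] {action}"
        self.short_term_memory.append(action)            # remember what it did
        return f"[{channel}] {self.role}: {action}"

# Hypothetical instance loosely inspired by the "Bobby" example later
# in the session; the guardrail keeps it away from emergency dispatch.
bobby = Agent(role="non-emergency police assistant",
              knowledge=["local bylaws", "station opening hours"],
              guardrails=["dispatch officers"])
print(bobby.act("answer a noise-complaint query", channel="whatsapp"))
print(bobby.act("dispatch officers to an address"))
```

The point of the sketch is that the guardrails are checked before every action, which is the panel’s core claim: agency without a trust layer is not deployable in government settings.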


The panel collectively stressed that robust guardrails, auditability, and human-in-the-loop oversight are indispensable for high-stakes government applications [148-154][190-203][216-231]. They emphasized the need for a command-centre architecture that enables testing, auditing and independent verification before deployment [214-215], sandboxes and clear liability rules for use-case selection [190-203], and transparent control panels that allow engineers to intervene, reassess and override outputs, thereby building trust [216-231].


Capacity development was highlighted as essential. The Minister’s pilots with farmers exemplify how end-users can be trained to contribute data and benefit from AI [48-55]. Saibal stressed that public-sector officials, who are not AI engineers, must be up-skilled to understand trust limits and know when human checks are necessary [253-254]. This aligns with broader policy observations that AI governance requires multidisciplinary collaboration among policymakers, lawyers, engineers and sector specialists [S1,S54].


Data sovereignty emerged as another focal point. The Minister presented Telangana’s sovereign AI nerve centre and open data pipeline as a model for treating AI as core public infrastructure [72-77]. Srinivas Tallapragada distinguished “strategic sovereignty” (control over data and policies) from “technical sovereignty” (control over the full hardware supply chain), urging governments to pursue both tracks: immediate strategic control now, with a longer-term plan for technical independence [246-254].


Panelists agreed that standards and regulation must be agile to keep pace with rapid AI advances. Lee advocated for global, multidisciplinary standards and evaluation ecosystems that can be localised to respect cultural and legal differences [260-280,S1]. Srinivas Tallapragada proposed an “agile regulation” model, where policy frameworks incorporate feedback loops and can be updated iteratively, mirroring product-development cycles [317-324]. Mike described an industry-led approach: embedding “transparency cards” (nutrition-label-style disclosures of model provenance, data sources, accuracy and bias) into every AI feature, thereby giving governments clear information and pre-empting regulatory lag [301-306].
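A transparency card of this kind is easiest to imagine as a small machine-readable record shipped alongside each AI feature. The field names below are assumptions sketched for illustration, not Autodesk’s or anyone else’s actual schema.

```python
import json

# Hypothetical "transparency card" for an AI feature: a nutrition-label-style
# disclosure of provenance, data sources, accuracy and known bias.
# Every field name here is an illustrative assumption, not a real standard.
transparency_card = {
    "feature": "floodplain-analysis-agent",
    "model_provenance": {"base_model": "example-model-v1", "fine_tuned": True},
    "data_sources": ["public elevation maps", "historical rainfall records"],
    "accuracy": {"metric": "error on held-out floodplains", "value": 0.13},
    "known_bias": ["sparse training data for small rural catchments"],
    "human_in_the_loop": True,   # an engineer must approve final designs
}

REQUIRED_FIELDS = {"feature", "model_provenance", "data_sources",
                   "accuracy", "known_bias", "human_in_the_loop"}

def is_complete(card: dict) -> bool:
    """A simple deployment gate: refuse to ship a feature whose card
    omits any required disclosure field."""
    return REQUIRED_FIELDS <= card.keys()

print(json.dumps(transparency_card, indent=2))
print("card complete:", is_complete(transparency_card))
```

Because the card is structured data rather than prose, a procurement pipeline could enforce completeness automatically, which is one way industry disclosure can pre-empt regulatory lag as described in the session.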


Concrete use-cases were explored. Mike detailed AI agents that analyse floodplains and optimise drainage in water-system design, illustrating how agents can assist early-stage infrastructure decisions despite imperfect inputs [170-174]. Srinivas Tallapragada cited police-assistant agents deployed in the UK (Bobby) and Tasmania (Terry), which handle non-emergency citizen queries and support field officers, demonstrating immediate value in public safety [185-188]. The panel also discussed AI-driven disaster response, with the Minister envisioning anticipatory actions that “counter dust, drought, monsoons and markets” [44-45], while others cautioned that probabilistic predictions require human oversight to avoid over-confidence [217-231][190-203].


Disagreements centred on the degree of autonomy appropriate for critical government functions. The Minister described AI as a “co-governor” that can act proactively (e.g., flood prediction) [45-48], whereas Mike and Lee highlighted the inherent uncertainty of AI outputs and the necessity of human-in-the-loop safeguards [217-231][190-203]. A second point of contention involved the primary mechanism for ensuring safe deployment: Saibal favoured procedural guardrails and human oversight, Lee pushed for internationally harmonised standards, Srinivas Tallapragada advocated for agile, feedback-driven policy, and Mike suggested industry self-regulation via transparency cards [148-154][260-280][317-324][301-306].


When asked to envision success metrics three years hence, panelists offered concrete indicators. Saibal suggested that the true win would be a farmer receiving vernacular AI advice at scale across India [334-335]. Lee envisaged an active AI safety evaluation ecosystem, with AI Centres of Excellence sharing techniques globally [339-343]. Mike highlighted faster, safer infrastructure development that enjoys public confidence and engineer endorsement [345-351]. Srinivas Tallapragada proposed measurable uplift in per-capita income for the bottom 50% of earners [355-357].


In closing, the Minister reaffirmed that AI agents can act as “force multipliers” for governance, provided that they are embedded within a trustworthy operating system, supported by sovereign data platforms and guided by robust guardrails [91-93]. Overall, the session highlighted a shared conviction that agentic AI, when built on sovereign data, transparent guardrails, and agile governance, can become a force multiplier for inclusive, resilient public services.


Session transcript: complete transcript of the session
Victoria Espinel

We are going to start with a very special guest. Minister Babu is going to join us for a keynote. Very excited to hear what you have to say, coming from Hyderabad, one of the centers of technology in India and in the world. So, Minister, thank you so much for joining us. And if I could ask you to come to the podium. Thank you so much, Minister.

Minister Sridhar Babu

Very good afternoon to all. In fact, we welcome you to our city of Delhi, a beautiful city, the capital of India. And many people are from India, too. And we welcome the distinguished, eminent panelists who are sitting here to discuss AI agents for a Better Tomorrow. And I welcome the leaders of the industry and the delegates over here. And especially coming to the subject, AI agents for Better Tomorrow. You know, I wish to see, you know, where we stand today and where we would end up tomorrow. That is the point of discussion over here. We stand today at a fundamental inflection point in the history of governance. As a policymaker, I would like to mention a few points.

Because all the technocrats and all the eminent, you know, scientists, maybe from physics or maths, may be sitting on the other side, to develop AI to the next level. You know, for decades, the digital revolution in the government was defined by a transition from paper to portals and from physical queues to digital clicks. But today, we are witnessing the birth of a new paradigm. We are moving beyond generative AI that simply answers. We are moving from that to agentic AI that acts now. That is what I’ve been discussing with Mr. Srinivas just now. And for 30 years, our relationship with technology was a series of commands. We used to give commands and used to get the answers.

We typed, we clicked, we prompted. We were the masters of the search bar. We used to, you know, we were the masters. Nobody can say that. But I stand here. I stand here today. I can see, and everybody can see, the search bar is dying. In its place, something more profound. Just now Mrs. Sweeney was just telling about agency. It’s just evolving. The first era of our nation building was defined by land. The second by industry. And the third is being defined, more elusively, by the intelligence of the system. And the nations that lead this century are those that learn to treat intelligence not as a product but as a form of public infrastructure.

The idea is not philosophical for our state of Telangana. It is the story of our everyday governance, because ours is an IT-driven state, as we are known for. And we often say that artificial intelligence has three lives in the country. The first life is in the research labs. The second we take into the policy papers. But the third, ultimately, both of these combined together, is how we are trying to affect the life that truly matters for each and everybody. You know, how do we see it? It is when AI meets the real challenges of our lives. When artificial intelligence meets the dust we face, when AI meets the drought, when it meets the monsoons, when it meets the markets of the living society. And this is where its legitimacy is earned, when it really counters this dust, drought, monsoons and markets. In Telangana we see agents not as a tool here. We would like to take them as teammates.

You know, the way the pilots rely on the co-pilots. Tomorrow, our government here in Telangana also sees that we rely on AI as co-governor systems that can predict a flood before the first cloud gathers over the Musi. The Musi is our river in the midst of our city. You know, allocate resources before the crisis, and deliver services before citizens ever need to ask. For example, if you take agriculture: a small farmer. I hail from a very remote area, and that too a rural place. For a farmer in my place, or in some other place in a rural area, the climate is not an environmental concept for them. It is, right now, a daily negotiation with uncertainty.

So when we built our AI advisors, we did something unconventional. Right now we are trying to do this at the pilot stage. We asked farmers to train the system with us. You know, the dialects, the soil wisdom, the lived patterns become the patterns of the model. This is where the governance comes into the picture. To use the best of the technologies, whatever you invent or produce sitting in R&D, use the best of your grey matter to come up with some products; until and unless we use and induce it into our governance, there will be no end result. That is what we believe in. That is why our Telugu-first AI can record land records, interpret satellite indicators and compress the time between the climate event and an incident settlement.

So this saved lots of time, you know, for our, you know, government agencies as well as for the end user, the farmer. Our satellite-driven heat analysis no longer stops at mapping temperatures. It now shapes zoning, green belts and urban cooling strategies for Hyderabad, which we are planning to take up to the core by 2035. And across 33 districts in our state, our solar-powered edge compute nodes ensure that government services remain operational when the grid fails. And this is also one of the novel things: Telangana is the first state where we have implemented this. Yet I don’t claim that these are the only examples for climate. This is just the start of a story.

This is just a beginning. This is the first preface, we can say, because the real breakthrough is not from each project. It is from the architecture that binds them together. Our future projects, like the state-of-the-art infrastructure in the upcoming AI city, an absolutely dedicated AI city, and the Bharat Future City, which shall be the net-zero city, are designed not as smart districts, either for technology or for other aspects, but as self-learning cities: territories that define sustainability, territories which can provide themselves with compute and make themselves policy advisors. And our country’s first sovereign AI nerve center, ICOM. You know, this is the first-ever initiative by any state in India: we have come up with the first sovereign AI nerve center, the AI innovation hub named ICOM. The aim and objective, you know, is that this intelligence shall go deep beyond just incubation, but also render into R&D, and shall be the prime focus of creating AI-ready talent for tomorrow’s world. And I would like to mention here that Hyderabad and Telangana is the first state to come up with a platform, this Telangana data exchange platform. The sovereign data open pipeline ensures that the intelligence is grounded in integrity.

So the platform is in the open. And this is the first state: we have put all the data on this platform. You know, if we go through it, by this open data pipeline, you know, 1,084 data sets have moved from administrative exhaust to ecological signal. We have created something rare in the Global South: a state that generates its own intelligence at scale. And we have seen the results too. And the results have shown. Healthcare doesn’t wait for symptoms. It now anticipates risk. Because of the data exchange we have done with our co-partners, even in healthcare, with the doctors or with the public health institutions, they are not just waiting to deliver the medication, but predicting the risk and trying to put it into action.

And we are not waiting for the heat waves to come. We are trying to analyze, through the data, how we should place ourselves, and we are preparing corridors for shade. And for farmers also, we believe, using this AI technology, we don’t want farmers to wait for the loss. You know, they have to receive assurance before despair. And we are also planning that infrastructure doesn’t wait to break. You know, it has to whisper when it will fail. You know, when all these cutting-edge technologies, especially AI, are deployed with purpose, AI agents offer government something rare in public life: the ability to act before harm, to prepare before shock, to protect before loss. And how resilient our infrastructure emerges, how safe the climate-resilient cities take shape, and how our public services become anticipatory, humane and trusted.

And this is the future we are imagining, and we are trying to put all our actions into stream, and it is this operating system we dreamt of and we started running. And I believe the next chapter of statecraft will not be written in the boardrooms of traditional power centers but in the living laboratories of the Global South, in cities like Hyderabad, and the world can already see a preview of what an intelligent century of governance looks like. Let us leave Bharat Mandapam today, here, while this great convention is taking place, with a shared conviction that the tomorrow we are building is not just smarter, it is braver. And, you know, the great caption goes: AI for everyone, AI for human welfare should be the theme.

And also, we should, I as a policymaker, you as the technology experts sitting over there, should aim and anticipate for it. I thank the organizers for giving me, you know, the time to air my pitch on behalf of our state of Telangana. I would like to thank the Salesforce team, especially the team management who invited me over here for gracing this, and it is great to see, you know, all the best brains sitting over here, the grey matter who would be doing much more for the welfare of our human beings. Thank you very much.

Victoria Espinel

Minister, thank you so much for joining us. We very much appreciate it. It was very exciting to hear what’s happening in Hyderabad and in Telangana. Let’s kick our panel off. Alright, so I am going to start with an icebreaker. Everyone gets 30 seconds to respond. This panel is about AI agents, so, I’m going to start there and then go down the row: what would you say is the single biggest difference that you see between the AI we were discussing when we sat here last year and the AI agents that we are seeing today? Saibal, can you kick us off?

Saibal Chakraborty

So I think in my mind the conversation has moved decisively towards agentic AI. We are no longer talking, as the Honorable Minister also said, about solving discrete problems or discrete searches. We are now looking at end-to-end AI-led execution of business processes or government processes. I think that’s the single biggest change in thinking that has come up.

Victoria Espinel

Professor Lee Tiedrich?

Lee Tiedrich

To put this in context, I was involved in the International AI Safety Report, and we just had our panel on that a little while ago. And Professor Bengio was saying the biggest change from ’25 to ’26 is the emergence of agentic AI. And my perspective is its ability not only to do the end-to-end, but to also act on behalf of… of people. That is really the big change.

Victoria Espinel

Mike?

Mike Haley

So I’m probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve specific problems. What we see now are agents that are able to abstract the problem, do chain-of-thought reasoning, take that and turn it into sequenced action, and turn to multi-agent, sort of systems-level thinking. So it’s the move from task-specific to systems-level that is the big shift that I’m seeing.

Victoria Espinel

And Srini?

Srinivas Tallapragada

Yeah, so I think for me the big shift has been from co-pilot, human in the loop, to agents which can act and really provide value, business value. And that’s been the big shift.

Victoria Espinel

So let’s talk about that value. Let’s talk about AI agents as a forceful multiplier. I’m going to start here this time Srini, you lead engineering for one of the biggest platforms in the world. There’s a lot of discussion about AI agents. Can you demystify this? What does that mean?

Srinivas Tallapragada

Yeah. So what does that mean? An agent, just like a human, first of all, has to act. It has agency and it acts. That’s the first big difference. And like any agent, it has to have a couple of things. It has to know a role. Just like a human, it needs to know what it’s supposed to do, what are the jobs to be done. It needs knowledge. Just like I have knowledge in my mind, an agent has to have knowledge, some memory, both short-term and long-term memory. And then it should also be able to act. You know, in a digital world, it should be able to act on an API or something.

And then it should be able to act wherever the surface is. Maybe it’s a WhatsApp channel, wherever the user is interacting with it: a WhatsApp channel or web channel or a digital channel or an SMS text. More importantly, most important in all of this, is we should have guardrails on what it’s not supposed to do. That’s the most important. And then all of it has to be covered, to make it useful, with what we call a trust layer, because these things can hallucinate. They can have bias, they can have toxicity. Avoid all of that. And they are unpredictable, ultimately, so there should be governance. Then there’s auditability, so you can check all of this. To do all of this is what an agent does. So this is also why, even though there is a lot of hype, in reality it hasn’t diffused enough. This is the business value which we are trying to bridge as the vendors.

Victoria Espinel

Thank you. Saibal I’m going to go to you next, so let’s talk about governance, we sit here in Delhi, the capital of one of the greatest nations of the world, the public sector, are they ready for this, how do we think about that?

Saibal Chakraborty

So I think, let me not answer that question; I think the public sector needs to be ready. All the way from managing public finances and public procurement to managing their workflows and processes better, there is no way that the public sector can avoid this. However, as Srini, you pointed out, the stakes here are very, very high. So imagine an agent crafting an RFP, a multi-million or a billion-dollar RFP, on behalf of the government. And, you know, in public procurement, we often sacrifice speed for procedural tightness. So how do we actually, what guardrails do we put around an agent, or more? So can it really be end-to-end? Can it really be fully autonomous?

Or do I still need that last human layer to make sure that the T’s are crossed, the I’s are dotted? Because the stakes are really high, and a mistake can really, you know, lead to a lot of negative impact. So I think the public sector has to be ready, but I think some of these guardrails have to be thought through. And in the context of the public sector: are agents fully autonomous, or do they still operate with a little bit of that human layer? I think that has to be thought through.

Victoria Espinel

That’s great, thank you. I love that you said RFPs, because that’s a concrete example. So let’s talk a little bit about use cases. And Mike, I’m going to go to you. Let’s talk about resilient infrastructure. One of the examples I hear a lot for AI agents is that they can help you make reservations, and I love to eat, so I think making restaurant reservations is actually pretty valuable to me. But could an AI agent do something like design a bridge? Could it design an energy grid? Like, where do we stand between reality and science fiction?

Mike Haley

Yes, I think we’re tracking pretty quickly to agents being able to do just those kinds of things. In the past, using computational methods and AI, which have been around for a reasonable time for these things, has been very difficult. Because if you’re using some form of computational method or AI to design a bridge, you have to specify that bridge perfectly. You have to give it perfect inputs. Now, it turns out that when a designer is designing something, they don’t have perfect inputs. That’s the process of design: actually figuring out what your inputs are, right? So this has always been a little bit of a barrier for people to use these advanced methods.

With AI, and specifically AI agents, you've now got a much easier way of interacting. It's more forgiving towards fuzzy requirements and earlier stages of thinking. It's able to give you things that inspire you. So one of the things I talk a lot about publicly is the notion of agents and creatives working in a loop together: it breaks the cycle where the engineer has to come up with every idea from scratch. Rather, describe what you're doing and let the agents explore. I'll give you one example specifically in infrastructure, because you wanted to get concrete. Something we work with is water systems, for example. We've built AI agents that can analyze floodplains.

They can analyze how you might want to think about water drainage and these kinds of things. So every time you're making a decision early in your design, you can let this thing run through, and it's going to optimize your design to ensure that drainage is going to be successful. Now, drainage seems like a small side thing, but it's a pretty massive part of infrastructure. And having an agent handle that for you is a pretty big deal.

Victoria Espinel

Mike, I have very close family ties to Louisiana, so drainage and flood zones, that is not a small thing. That is a very, very big thing. And actually, that’s a perfect segue to the question I wanted to ask Srini. So one of the most complex things that a government might have to deal with is disaster response. Is that a place where AI agents could be helpful?

Srinivas Tallapragada

I really like the theme, welfare for all. And while we can think of very big things AI might do, AI can add value right now, and disaster response is one good example. Another small example I wanted to give: the key is to give back time to people. That's very valuable; giving back time to everybody is a very noble goal, in my opinion. So we have this very interesting use case in a city in New Thames in the UK, where they created an agent called Bobby ("bobby" is a UK term for a policeman). Citizens ask a lot of questions which are not emergencies, and Bobby answers them.

More than 90% of them get a lot of value. What was interesting for me was another city, in Tasmania, which is using our product Agentforce to roll out agents to more than a thousand police officers, because a lot of times when they are in the field, police officers, new or more experienced, have a lot of questions. They call this agent Terry, and a lot of them say Terry is their best partner. So while we can think about futuristic things, here and now there is a lot we can provide with the technology and guardrails, in the public sector and obviously in the private sector. If you have the right platform, with trust and governance as foundational values and all the right guardrails, we can still add a lot of value. We are seeing thousands of examples across the public and private sector in crawl-walk-run mode: you start with something basic and you can still add value. There are the most esoteric cases with multi-agent orchestrations, but you can start with the basics today and still get a lot of value. That's what we are seeing.

Victoria Espinel

That's great. So, Professor Tiedrich, we've talked a little bit about how agents can help governments serve their publics. Are there risks there? Are there risks of over-reliance?

Lee Tiedrich

Yeah, I mean, there are definitely risks, and I share the view of my co-panelists that there are a lot of benefits to using AI in government and improving government services worldwide. But like everything else, we have to do it cautiously and smartly, and some of it comes back to the human factor: pick your use cases wisely. One of the themes in the safety report is that AI is emerging very jaggedly. We have some use cases, like computer programming, that are really good. There are others that may not be quite ready for prime time. So when we think about over-reliance, it is about seeing where AI is excelling, focusing on those use cases, and maybe doing sandboxes around some of the others to give them a little more time to mature.

I think the over-reliance point also comes back to the guardrails, picking up on some of the great points. One of the things in the safety report is good news: we've made a lot of progress on guardrails and risk management, but as the technology moves quickly, there is a lot more work to be done. So we should not rely so much that we overlook guardrails, and we should think about where humans should be in the loop. The third thing I'll mention is the interoperability of different agents. As agents start to call upon third-party agents, it's thinking through what guardrails you need. How do you choose them? How do you allocate liability? How do you test the agents that you're going to bring into your system?

Victoria Espinel

So guardrails have come up; Srini mentioned it, you just mentioned it. Let's talk about guardrails a little bit. So, Srini, we hear about chatbots, we hear about hallucinations. Those can be annoying. When you're talking about a government deploying AI agents, the consequences can be extremely significant; a hallucinating agent can be quite dangerous. So let's talk about guardrails. How do you engineer trust into a system so that a minister or a secretary can feel confident that it's a tool they can use to serve their people?

Srinivas Tallapragada

Agents can drift, they can hallucinate. So you need a command center where you can see all of it. This is the difference between a pilot or a demo, of which you can find thousands on YouTube, and real life, where these things matter. So we had to build all of these things so that the customers or governments can build confidence: they can audit, they can test, and not only they themselves, an independent party can also test. All of this infrastructure is what is required to make this a reality. But once you do that, there's huge value you can immediately provide to either the customers or the citizens.
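The "command center" idea, in which every agent action is policy-checked, recorded, and auditable by an independent party, can be sketched minimally. All names and the policy mechanism below are hypothetical illustrations, not Salesforce's actual product API:

```python
import json
import time

# In a real deployment this would be append-only, tamper-evident storage.
audit_log = []

def guarded_call(agent_name, action, inputs, allowed_actions):
    """Run an agent action only if policy allows it, and record everything."""
    entry = {"ts": time.time(), "agent": agent_name,
             "action": action, "inputs": inputs}
    if action not in allowed_actions:
        entry["outcome"] = "blocked"
        audit_log.append(entry)
        raise PermissionError(f"{agent_name} may not perform {action!r}")
    entry["outcome"] = "allowed"
    audit_log.append(entry)
    return entry

guarded_call("land-records-bot", "lookup", {"parcel": "123"}, {"lookup"})
try:
    guarded_call("land-records-bot", "delete", {"parcel": "123"}, {"lookup"})
except PermissionError:
    pass

# Auditors, or an independent party, can replay the JSON log afterwards.
print(json.dumps([e["outcome"] for e in audit_log]))  # → ["allowed", "blocked"]
```

The point of the sketch is that blocked actions are logged just like allowed ones: the audit trail, not the agent's own report, is what a government or third party inspects.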

Mike Haley

Can I just add to that quickly? Because I think you hit a really interesting point at the end there. When people talk about guardrails, they think of guardrails as this perfect thing: that at some point the guardrails are going to get strong enough that every result is perfect, it's completely predictable, and we're good. And I think we need to talk about the honesty of that. We're talking about systems that are inherently probabilistic. You're never going to make a probabilistic system 100% deterministic; it's an oxymoron. So you still do all the guardrail work we're all talking about, but, where you were going at the end there, you also make systems that can look at the accuracy of what's produced and give you some feedback on how accurate the solution is or how well it's going to perform. And then, and this is very important, what we've discovered is giving control to the human being, in our case to an engineer, who is able to say, oh, I get it.

The result is a little off. I'm going to give it some more feedback. I'm going to reassess the results. I'm going to run it again. Or I might even go in myself and tweak that information. And what we've discovered, when I'm talking to an engineer and explaining how this stuff works, is that if I don't give them that level of control, they don't trust the system. The minute they know they can actually control it, trust no longer depends on a perfect answer. Trust actually depends on transparency and understanding, and then the ability to come in and control something.
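The loop Haley describes, where the agent proposes, reports how confident it is, and the engineer reviews, gives feedback, and reruns, can be sketched as a toy control loop. Everything here (the function names, the confidence score, the culvert parameter) is a hypothetical illustration of the pattern, not any real product's behavior:

```python
import random

random.seed(0)  # deterministic for the example

def agent_propose(feedback):
    """Hypothetical stand-in for one agent design step.

    Returns a candidate design plus a self-reported confidence score,
    standing in for the accuracy feedback the agent surfaces.
    """
    design = {"culvert_diameter_m": round(random.uniform(0.5, 2.0), 2)}
    confidence = round(random.uniform(0.0, 1.0), 2)
    return design, confidence

def human_in_the_loop(max_rounds=5, threshold=0.8):
    """The engineer stays in control: review, annotate, rerun."""
    feedback = []
    for round_no in range(1, max_rounds + 1):
        design, confidence = agent_propose(feedback)
        if confidence >= threshold:
            # Engineer accepts only when reported accuracy is acceptable.
            return {"design": design, "rounds": round_no, "accepted": True}
        # Otherwise the engineer notes what looked off and reruns.
        feedback.append(f"round {round_no}: result looked off, adjust")
    # Never a silent failure: fall back to a fully manual design.
    return {"design": None, "rounds": max_rounds, "accepted": False}

result = human_in_the_loop()
```

The design choice worth noticing is the explicit fallback branch: trust comes from the engineer always having an exit, not from the agent always being right.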

Victoria Espinel

But I think that's also because the engineers understand this. It's a tool for them to use, to help them. It's not something that is going to take control. Is there anything specifically with respect to infrastructure that you think governments should be mindful of?

Mike Haley

Yeah. Well, look, infrastructure is not known as the easiest and quickest thing to build in countries, right? And one of the really boring but absolutely necessary things with infrastructure is to make sure the digital ecosystem around that infrastructure is set. I see a lot of places in the world getting into building infrastructure, trying to do it quickly, without getting all that digital infrastructure in place. So building information modelling, ensuring that every part of your infrastructure is correctly modelled and represented at the right level. AI is not going to just magically come in and solve a bunch of problems unless you've got a lot of that digital stuff in place already.

So it's a little bit of the boring work, but getting that stuff in place early is one of the biggest things. I've had a number of conversations here this week about the 2047 initiative in India and the amount of infrastructure that needs to be built in this country, and the importance of using something like building information modelling and getting standard data in place now. If you get that in place now, all this AI goodness is way easier to deploy against it.

Victoria Espinel

Yeah, please.

Srinivas Tallapragada

Yeah, so I heard a lot of discussion around sovereignty, and I think the way we should think of sovereignty is at two levels: there's strategic sovereignty and technical sovereignty. By strategic sovereignty, I mean you get control over your data, your governance policies, and your operational policies. That, I think, you can implement right now and get value. Then there is the technical one, where people want to control their entire supply chain, from the chips on up. I would like governments, public officials, and policy officials to think of these as two tracks. One takes longer and needs a lot of capital investment. Don't let the second track stop you from getting the benefit of the first track. The first track is easy: you can ensure the data doesn't leave your country, your policy guardrails are in control, and there is a human in the loop. You still get a lot of benefits while you continue on the second track. That would be my request to all the governments.

Saibal Chakraborty

Can I just make a quick build on what Mike said? Because I do a lot of my work in the public sector with governments. I think one of the biggest guardrails, beyond policies, is actually the upskilling. Like Mike said, it's an inherently probabilistic system, so you cannot expect it to give correct results all the time; there's nothing called a correct result. The person who's actually using the tool at the district level or the state level to make real government decisions is not an AI engineer. That person needs to be upskilled and needs to be told what can be trusted and what requires that additional layer of check.

So if agentic AI has to take off in the public sector at scale, then that upskilling, at various levels of the government, on what can be trusted and what cannot, is also a very, very big component.

Victoria Espinel

Yes, I totally agree. Professor, I wanted to ask you: it feels so trite to say technology is moving really quickly, but in the last few years AI has been moving very, very quickly. We've talked a lot about guardrails. How should governments think about this? How are governments going to be able to keep up in terms of setting government expectations, and potentially regulation, for a technology that is moving so quickly?

Lee Tiedrich

It's a hard one. AI has evolved into a global, multi-disciplinary field, and I think we need to bring the global community together. We need policymakers and lawyers talking with engineers and sector specialists to really inform the policy in real time. I'm a big fan; I spent a year working at NIST, the U.S. National Institute of Standards and Technology. We need to figure out how to do some of the guardrails starting with the science. Then the science can inform how to develop the standards and how to develop the evals. And then it becomes a policy question.

I mean, different countries have different views on whether we should regulate or not. The U.S. has a very deregulatory approach; Europe is the opposite. But if we can agree on what those common standards are for evaluation and testing, then governments are free to decide: do we mandate this or not? And there is one important nuance to add to the mix, and this has been a theme of the conference: we want some standardization on these evaluation mechanisms.

But we have to recognize that we speak different languages and have different cultural norms. So when we want standardization, we've got to be able to localize what the evaluation looks like, because what might be appropriate in one country isn't going to be appropriate in another. So it's hard, but start with the science; I would point people to the scientific report. Build on that, working through the AISI network, through standards organizations, and through all these other initiatives to develop the evaluation ecosystem. Then regulations can overlay on top of that as policymakers think appropriate for their jurisdictions.

Victoria Espinel

But if I could ask a follow-up question to you or any of the panelists: I think one of the challenges there is that it's really helpful for companies, and I also speak for the enterprise software companies that I represent, to know what those government expectations are. Industry is looking for clarity and predictability.

Mike Haley

Should I take a shot at it? As a software provider, at Autodesk we definitely deal with that, Victoria. We've had a couple of approaches. One, we're obviously going to stay on top of this all the time, working with governments, making it part of the conversation. I spend a good part of my year travelling around the world, talking to governments, trying to help them understand what needs to happen, but also to help us understand, like you said, what they're wanting. But the main problem is just the sheer variance. Even within the United States, we have differences between state efforts, right? And then you get around the world, and it gets even more complicated.

What we've tried to do is run as far ahead of this as we can. If there is a way to build in good controls right from the beginning, we actually build those controls to the maximum extent that we can, within reason. I'll give you an example. In every AI feature we have in our software, we have something called a transparency card, which looks like a nutrition label on food. That label tells you what kind of model is behind the feature, what data was used to train it, what level of control you have, what accuracy it has, any bias that we know about in the model, that kind of stuff.

And it's a standard thing. We rolled that out about a year ago, really to try and stay ahead of things, so that if governments started asking for these things, well, we've got a transparency card. What's actually happened now is that there's a bunch of interest in that becoming part of a standard. I'm not saying that just to tout us, because I think other companies are doing great things in this space as well; you guys are doing a bunch of good stuff in this space too. I think this is an opportunity for us in industry to run ahead and help define some of these things, because it is moving so fast.
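As a rough sketch of what such a nutrition-label disclosure contains, based only on the fields Haley lists (model, training data, user control, accuracy, known bias), a card could be modelled like this. The schema, field names, and sample values are my own illustrative guesses, not Autodesk's actual format:

```python
from dataclasses import asdict, dataclass, field

@dataclass
class TransparencyCard:
    """Nutrition-label-style disclosure for one AI feature.

    Field names are illustrative, not a real published schema.
    """
    feature: str
    model_type: str            # e.g. "LLM", "surrogate model"
    training_data: str         # provenance of the training set
    user_controls: list        # what the user can override or rerun
    known_accuracy: str        # measured accuracy, with caveats
    known_biases: list = field(default_factory=list)

card = TransparencyCard(
    feature="drainage layout suggestions",
    model_type="gradient-boosted surrogate model",
    training_data="licensed civil-engineering project data",
    user_controls=["accept", "edit", "rerun with feedback"],
    known_accuracy="within 10% on held-out drainage benchmarks",
    known_biases=["under-represents tropical rainfall regimes"],
)

# Serialize to a plain dict, ready to render alongside the feature.
print(asdict(card)["feature"])  # → drainage layout suggestions
```

Keeping the card a plain, serializable record is what makes the standardization Haley mentions plausible: regulators can compare cards across vendors without running the underlying models.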

And, maybe I shouldn't say this publicly, but the government doesn't always have the best answers, right? So we can work with government to help them develop those answers and come up with good things, which then helps us resist some of the complexity that's coming down the line.

Srinivas Tallapragada

Yeah, so one of the challenges in this is that you can't project it very well. It's an exponential curve; it's very hard to project. So sometimes it's learned by doing. I think the biggest thing all governments can do is the policy framework for how to update these standards. Today it usually takes a long time, so everybody's afraid, and then it's even harder to change a standard. So they try to solve everything up front, while things keep changing. The main thing policymakers could do is build in the feedback loop: a way to improve the policy framework, because then you don't need to be afraid of getting everything right.

You understand that, hey, you told me some basics, and as new data comes in, you can update it. In engineering and product, we call this the product feedback loop and agile development. If we have something equivalent to that, then everybody is clear, because we all want the right thing. There's no disconnect on the foundational goal: we want AI to help our entire community in a net positive way. With a changing technology, if the regulatory framework is able to change, then we are not afraid of having to get everything right on day one.

And we can learn by doing it. So agile regulation.

Victoria Espinel

I have loved this panel. Unfortunately, we're coming to a close, so I'm going to ask each of you one final question. Saibal, I'm going to start with you and then head this way. If we were so fortunate as to meet again in Delhi in three years, looking back, what would you say is the one thing that would be the best way to determine whether or not we have succeeded in addressing some of these challenges? I know it's a big question, sorry. Thank you.

Saibal Chakraborty

Since we're in Delhi, I'll give the answer in the Indian context. One of the primary themes of this particular conference is inclusivity, so for me the true success of AI will be if a farmer could talk to a small-language-model-powered tool in his or her own vernacular language and get practical advice on how to manage the crop and the cattle, and if that could be scaled up across the length and breadth of India. That, for me, is the real win for AI.

Victoria Espinel

That's a big win. I mean, that's a significant impact. Thank you. Great. Professor Tiedrich?

Lee Tiedrich

I'm kind of coming back to the evaluation ecosystem. We've made a lot of progress over the last couple of years, but more work needs to be done. More countries, including in the Global South, are launching AISIs, AI safety or security institutes, which is not hard, binding regulation, but it is governments weighing in. Real progress three years from now would be an active AISI network that's sharing information and making real progress on evaluation techniques; and one of the commitments that came from some of the companies yesterday is also localizing that, so everybody can benefit, Global North and Global South. Thank you.

Victoria Espinel

Mike?

Mike Haley

So earlier on, I spoke about infrastructure, the physical infrastructure that is in countries. What I would hope to see, in a couple of years' time, is infrastructure genuinely getting developed faster than it's ever been developed, which is a really, really tough problem to make happen in the physical world. As a measure of AI truly doing this, that's an incredible measure. But on top of that, it needs to be doing so without compromising safety, and without being a big black box that nobody understands. So what I would love to see is not only that infrastructure being developed faster, but the public engaged with it.

The engineers and people doing it feel comfortable with it. They feel secure. They feel fine signing off on it, because they feel it's reliable. Thank you.

Victoria Espinel

Srini?

Srinivas Tallapragada

If AI is as revolutionary as we all assume, I would hope that in three years the bottom 50% income percentile has seen a measurable rise in per capita income. That, for me, is the real impact of this technology.

Victoria Espinel

That's fantastic. I want to say thank you to all of our panelists, and a special thank you to Srini and to Salesforce for bringing us all together here today. Thank you to our audience for joining us. A big round of applause for our panelists. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (14)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Victoria Espinel welcomed Minister Sridhar Babu as a “very special guest” and invited him to the podium”

The opening remarks in the knowledge base explicitly refer to the minister as a “very special guest” and thank him for joining the keynote [S1] and [S2].

Additional Context (medium)

“AI should be treated as public infrastructure rather than a product”

The knowledge base contains statements that “Intelligence is not an asset, it’s infrastructure” and that “Information should be treated as a public good rather than a commercial commodity,” providing supporting context for treating AI as public infrastructure [S82] and [S80].

Additional Context (medium)

“AI advisors are being trained together with farmers, incorporating local dialects, soil wisdom and lived patterns into the model”

A related point in the knowledge base notes that farmers are using AI weather forecasts, illustrating how AI is being applied in agriculture to support farmers, which adds nuance to the claim about farmer-centric AI advisors [S85].

External Sources (89)
S1
S2
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — Professor Lee Tiedrich? big win. I mean, that’s a significant impact. Thank you. Great. Professor Tietrich? Yeah, so
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-safety-at-the-global-level-insights-from-digital-ministers-of — I think Yashua and Alondra’s comments tee up the next question for Adam. These risks are evolving quite rapidly, and one…
S4
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into accoun…
S5
Agents of Change AI for Government Services & Climate Resilience — – Minister Sridhar Babu- Srinivas Tallapragada
S6
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This panel discussion on heterogeneous computing and AI infrastructure in India brought together leading experts from in…
S7
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — And we can learn by doing it. So agile regulation. I really like the theme, welfare for all. And I think while we can t…
S8
Agents of Change AI for Government Services & Climate Resilience — – Mike Haley- Srinivas Tallapragada – Minister Sridhar Babu- Srinivas Tallapragada – Saibal Chakraborty- Srinivas Tall…
S9
Agents of Change AI for Government Services & Climate Resilience — – Saibal Chakraborty- Srinivas Tallapragada
S10
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — Government data, it’s early days, it’s very early days, but government data is being provided access to through platform…
S11
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — And Srini? Srini Srinivasan But I think that’s also because the engineers understand this. This is the tool. It’s a too…
S12
Agents of Change AI for Government Services & Climate Resilience — -Victoria Espinel- Panel moderator and discussion facilitator
S13
Pre 4: Dynamic Coalition on data and trust: Stakeholders Speak – Perspectives on Age Verification — – **Regina Filipová Fuchsová**: Industry Relations Manager at EURID, session moderator Regina Filipová Fuchsová: Excuse…
S14
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — I think it should be soon. I think there are ministers, Ashniv Ashton sir. We should add one more role to them. We shoul…
S15
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S16
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection t…
S17
Agentic AI in Focus Opportunities Risks and Governance — So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The secon…
S18
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S19
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And as several of our panelists emphasized, if we don’t address that gap deliberately, the shift towards AI agents is on…
S20
Safe and Responsible AI at Scale Practical Pathways — Artificial intelligence | Building confidence and security in the use of ICTs Guardrails, Human‑in‑the‑Loop, and Risk‑A…
S21
WS #283 AI Agents: Ensuring Responsible Deployment — Prendergast frames agentic AI as a critical technological shift where AI has evolved beyond reactive tools to become pro…
S22
Heat action plans in India struggle to match rising urban temperatures — On 11 June, the India Meteorological Department (IMD)issued a red alert for Delhias temperatures exceeded 45°C, with rea…
S23
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Policy needs to be at a principle level because if it becomes too detailed, it becomes hard to maintain, especially with…
S24
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya explained that trust ranks first among Salesforce’s five core values—trust, customer success, innovation, e…
S25
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S26
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S27
The Declaration for the Future of the Internet: Principles to Action — A balanced scorecard with certain parameters provides a measurable indicator of the progress made. A nuanced shift in p…
S28
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Moderate disagreement with significant implications. While speakers agreed on broad goals, their different assessments o…
S29
From summer disillusionment to autumn clarity: Ten lessons for AI — Overall, what’s notable in all these political developments is pragmatism. The lofty narratives of last year – like fear…
S30
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
S31
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection t…
S32
AI as critical infrastructure for continuity in public services — Artificial intelligence | Data governance | Building confidence and security in the use of ICTs Data sovereignty requir…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Capacity Building and Implementation Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me …
S34
Open Forum #3 Cyberdefense and AI in Developing Economies — – Ram Mohan- Wolfgang Kleinwachter- Philipp Grabensee Capacity Building and Human Resources Development | Legal and re…
S35
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S36
Open Forum #17 AI Regulation Insights From Parliaments — Capacity Building and Education Capacity building and education are essential for all stakeholders Development | Capac…
S37
Agents of Change AI for Government Services & Climate Resilience — Artificial intelligence The minister says AI is moving beyond simple question answering toward agents that can act auto…
S38
WS #283 AI Agents: Ensuring Responsible Deployment — These key comments fundamentally transformed what could have been a technical discussion about AI governance into a nuan…
S39
Survival Tech Harnessing AI to Manage Global Climate Extremes — In the large scale model, it defiles. So the AI can downscale better way in the localized, suppose one kilometer resolut…
S40
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S41
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S42
Keynote-António Guterres — We need guardrails that preserve human agency, human oversight and human accountability
S43
Agentic AI and the new industrial diplomacy — How this looks in practice:The European AI Act,which came into force in2024, classifies many industrial AI systems as ‘h…
S44
Digital Embassies for Sovereign AI — This addresses the need for adaptive governance frameworks that can keep pace with rapid technological change
S45
WS #162 Overregulation: Balance Policy and Innovation in Technology — This workshop focused on balancing AI regulation and innovation, exploring how to foster technological advancement while…
S46
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived ina…
S47
Opening address of the co-chairs of the AI Governance Dialogue — International technical standards and their role to make sure that policy and regulation is flexible and agile Standard…
S48
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The discussion suggests several key implications for agricultural development. First, AI tools must be designed with acc…
S49
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — AI in this regard offers significant potential. We’re seeing AI systems and tools being applied to optimize the use of c…
S50
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — In conclusion, data, AI, and new technologies offer great potential in revolutionising and improving agriculture. Howeve…
S51
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jungwook Kim: Thank you. So the question is dealing with the safety or security issues around the AI and it’s a public o…
S52
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Kolbe-Guyot explains that public administration faces unique constraints because citizens cannot choose alternative gove…
S53
Safe, secure, and trustworthy AI: What is it and how do we get there? — While global agreements on core principles are welcome, they need to turn into concrete action. So what does it mean to …
S54
Agents of Change AI for Government Services & Climate Resilience — It’s a hard one. I think, you know, AI has evolved into a global multi – disciplinary field. And I think, you know, we n…
S55
Agentic AI and the new industrial diplomacy — The shift from ‘pilot to plant’ is happening globally, but the motivations, players, and governance challenges vary shar…
S56
WS #283 AI Agents: Ensuring Responsible Deployment — Prendergast frames agentic AI as a critical technological shift where AI has evolved beyond reactive tools to become pro…
S57
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — So when we built our AI advisors we did something unconventional. Right now we are trying to do on the pilot stage. We a…
S59
Telangana launches Aikam to scale AI deployment — The Telangana government haslaunchedAikam, a new autonomous body aimed at positioning the state as a global proving grou…
S60
Keynote-António Guterres — I urge Member States, industry and civil society to contribute to the panel’s work. Second, launching a global dia…
S61
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Third, ensuring transparency in AI systems:Commanders must understand the data sources, training methodologies, and deci…
S62
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya explained that trust ranks first among Salesforce’s five core values—trust, customer success, innovation, e…
S63
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — -Data sovereignty: Where Europe should maintain complete control -Operational sovereignty: Ensuring continuity under ex…
S64
The Declaration for the Future of the Internet: Principles to Action — A balanced scorecard with certain parameters provides a measurable indicator of the progress made.
S65
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S66
Open Forum #18 Digital Cooperation for Development Ungis in Action — Establishing concrete metrics and evaluation frameworks for measuring WSIS implementation progress
S67
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Outcome Focus: Success should be measured by meaningful business and human outcomes rather than just productivity metric…
S68
WAIGF Opening Ceremony & Keynote — Hajia Sani: I’m sure we can do much better than that. Another round of applause for the Minister. Thank you so much. You…
S69
(Day 3) General Debate – General Assembly, 79th session: morning session — President: On behalf of the Assembly, I wish to thank the President of the Republic of the Gambia. The Assembly will h…
S70
https://dig.watch/event/india-ai-impact-summit-2026/trusted-connections_-ethical-ai-in-telecom-6g-networks — This is not a science fiction. This is the power of AI in telecommunication. Today, AI is transforming industries. And a…
S71
Opening remarks — Morning greetings were extended to participants at the conference, including those joining virtually, with particular ac…
S72
Keynote-Sundar Pichai — Namaste. Thank you. Thank you. Prime Minister Modi and distinguished leaders. It’s wonderful to be back in India. Every …
S73
Steering the future of AI — **Major Discussion Points:**
S74
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — This presentation is structured as a single, extended keynote rather than a traditional discussion, but Hunter-Torricke’…
S75
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — Pentland presented a future where AI agents would handle virtually every business and government process, essentially ad…
S76
The Future of the Internet: Navigating the Transition to an Agentic Web — Historical dominance of browser-based search experiences; emerging possibilities in voice, thought understanding, and ro…
S77
Thinking through Augmentation — While Ucuzoglu is optimistic about the long-term impact of transformative technology, he acknowledges that it is not an …
S78
Fireside Conversation: 02 — This discussion features AI pioneer Yann LeCun, known as the “godfather of deep learning,” speaking with moderator Maria…
S79
CLOSING CEREMONY | IGF 2023 — Rodney Taylor:Thank you. Distinguished ladies and gentlemen, good evening. I am honored to speak this evening, and I had…
S80
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Information should be treated as a public good rather than a commercial commodity
S81
Capacity Building in Digital Health — Chopra illustrated this with a tuberculosis detection example: rather than training more healthcare workers to detect tu…
S82
Keynote-Rajesh Subramanian — Intelligence is not an asset, it’s infrastructure, the foundation of the future of global progress, productivity, and ec…
S83
Open Forum #33 Building an International AI Cooperation Ecosystem — Kurbalija argues that AI has transformed from being a mysterious technology controlled by a few developers and top labs …
S84
GEO-politics/economics/emotions in the AI era — This analysis has framed this recalibration through three interconnected lenses:
S85
How AI Drives Innovation and Economic Growth — Evidence from around the world is consistent with this. Farmers respond to these AI weather forecasts. So I think that’s…
S86
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Audience- Various audience members asking questions
S87
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — This discussion from the AI for Good conference featured presentations on using artificial intelligence to combat aging …
S88
Panel Discussion Data Sovereignty India AI Impact Summit — So you’re not left behind. See, AI is a journey where we don’t want any country to be left behind. One, lack of… resou…
S89
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Because innovation means progress, for us humans and for our planet. So indeed, what better motto than People, Planet, P…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Minister Sridhar Babu
5 arguments · 122 words per minute · 1656 words · 811 seconds
Argument 1
Transition from generative AI to “agentic” AI; the traditional search bar is being replaced (Minister Sridhar Babu)
EXPLANATION
The Minister describes a shift from generative AI that merely provides answers to agentic AI that can take actions autonomously. He notes that the classic search‑bar interface is becoming obsolete as more proactive AI systems emerge.
EVIDENCE
He states that we are moving beyond generative AI that simply answers toward agentic AI that acts, signalling a new paradigm in AI development [22-23]. He also observes that the search bar is dying and being replaced by something more profound [33-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes a shift from answer-based generative AI to action-oriented agentic AI and the decline of the classic search bar, which is highlighted in the opening remarks about AI as an inflection point [S1].
MAJOR DISCUSSION POINT
Shift from answer‑based to action‑based AI
AGREED WITH
Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Argument 2
AI advisors for farmers, flood prediction, climate‑responsive services, and AI‑driven urban planning (Minister Sridhar Babu)
EXPLANATION
The Minister outlines several government‑level AI applications, including farmer advisory systems, flood forecasting, climate‑responsive service delivery, and satellite‑driven urban planning. These examples illustrate how AI agents are being integrated into everyday governance to improve resilience and efficiency.
EVIDENCE
He explains that AI can act as a co-governor to predict floods before clouds gather over the Musi River and allocate resources pre-emptively [45-48]. He describes pilots where farmers train the system with local dialects and soil wisdom, turning lived patterns into model inputs [48-55]. He mentions satellite-driven heat analysis that now informs zoning, green belts, and urban cooling strategies for Hyderabad [58-62]. He also notes solar-powered edge compute nodes that keep services operational across 33 districts when the grid fails [63-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-driven farmer advisories, flood forecasting and climate-responsive planning are described in the panel on AI for climate resilience and disaster response [S2].
MAJOR DISCUSSION POINT
Practical AI use cases in agriculture and climate
AGREED WITH
Saibal Chakraborty
Argument 3
Creation of a sovereign AI nerve centre, AI city, and open data exchange platform for statewide services (Minister Sridhar Babu)
EXPLANATION
The Minister announces the development of a state‑level AI hub, an AI‑focused city, and a data exchange platform that will serve as a sovereign AI infrastructure. These initiatives aim to foster AI research, talent development, and secure data handling within Telangana.
EVIDENCE
He describes upcoming state-of-the-art infrastructure in an AI city and a net-zero Bharat Future City designed as self-learning territories that provide compute and policy advice [70-72]. He introduces ICOM, the first sovereign AI nerve centre intended as an innovation hub and talent pipeline [73]. He details the Telangana data exchange platform that hosts 1,084 datasets, converting administrative exhaust into ecological signals and enabling a sovereign data pipeline [73-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The minister’s announcement of a sovereign AI nerve centre and an AI-city aligns with references to a state-level AI hub and data exchange platform in the policy briefing [S1] and the full-stack sovereign AI discussion [S14].
MAJOR DISCUSSION POINT
Building sovereign AI infrastructure
AGREED WITH
Srinivas Tallapragada
Argument 4
AI should be treated as a co‑governor, with policies that embed human‑pilot oversight (Minister Sridhar Babu)
EXPLANATION
The Minister proposes that AI systems function as co‑governors alongside human decision‑makers, providing predictive capabilities while retaining human oversight. This approach is presented as a way to enhance public service delivery without relinquishing control.
EVIDENCE
He likens AI to a co-pilot, stating that the government will rely on AI as co-governors that can predict floods and allocate resources before citizens request services [45-48]. Earlier, he reflects on the historical shift from command-based interactions to a partnership with technology, emphasizing the need for human oversight [25-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The co-pilot framing of AI as a partner to human decision-makers is reiterated in the opening remarks about AI as public infrastructure and governance tool [S1].
MAJOR DISCUSSION POINT
AI as collaborative governance tool
AGREED WITH
Srinivas Tallapragada, Lee Tiedrich, Mike Haley, Saibal Chakraborty
DISAGREED WITH
Mike Haley, Lee Tiedrich
Argument 5
AI is a form of public infrastructure; Telangana is building a sovereign AI nerve centre and data exchange platform (Minister Sridhar Babu)
EXPLANATION
The Minister frames AI as essential public infrastructure, comparable to roads or electricity, and highlights Telangana’s efforts to establish a sovereign AI nerve centre and an open data exchange. This positions AI as a foundational element of state development.
EVIDENCE
He declares that the nation leading this century will treat intelligence as public infrastructure rather than a product [40-41]. He reiterates the creation of the sovereign AI nerve centre and the data exchange platform that ensures intelligence is grounded in integrity and kept within the state [70-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The minister’s view of AI as essential public infrastructure is echoed in the opening statements that compare intelligence to roads and electricity [S1] and in broader discussions of trusted digital infrastructure [S15].
MAJOR DISCUSSION POINT
AI as public infrastructure
S
Srinivas Tallapragada
6 arguments · 171 words per minute · 1282 words · 449 seconds
Argument 1
An AI agent must have a defined role, knowledge, memory, actuation ability, and guardrails (Srinivas Tallapragada)
EXPLANATION
Srinivas outlines the essential components of an AI agent: a clear role, domain knowledge, both short‑term and long‑term memory, the ability to act via APIs or channels, and robust guardrails to prevent misuse. These elements together constitute a trustworthy, functional agent.
EVIDENCE
He explains that an agent needs to know its role, possess knowledge, retain short-term and long-term memory, be able to act on digital interfaces such as APIs, and operate across channels like WhatsApp or web [136-144]. He stresses the importance of guardrails and a trust layer to mitigate hallucinations, bias, and toxicity [190-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines for trustworthy agents emphasise role definition, knowledge bases, memory, actuation via APIs and strong guardrails, as outlined in the safe-AI at scale framework [S20].
MAJOR DISCUSSION POINT
Core attributes of AI agents
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Mike Haley
Argument 2
AI agents support disaster response, police assistance, and public safety operations (Srinivas Tallapragada)
EXPLANATION
Srinivas shares examples where AI agents are deployed for non‑emergency citizen queries and to assist police officers, demonstrating their utility in public safety and disaster response contexts. These pilots show that agents can provide timely information and support to frontline personnel.
EVIDENCE
He cites a city in New Thames, UK, where an agent called Bobby answers over 90% of citizen non-emergency questions [185-188]. He also mentions a Tasmanian city using an agent named Terry to support more than a thousand police officers in the field, providing answers to operational questions [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel on AI for climate resilience cites concrete deployments of agents for disaster response and public safety, matching the described use cases [S2].
MAJOR DISCUSSION POINT
Public safety applications of AI agents
Argument 3
Robust guardrails, auditability, and a command‑center are required for confidence in AI deployments (Srinivas Tallapragada)
EXPLANATION
Srinivas argues that trustworthy AI deployment demands a centralized command centre, comprehensive auditability, and the ability for independent parties to test systems. These mechanisms build confidence for governments and citizens alike.
EVIDENCE
He states that a command centre is needed to differentiate pilot demos from real-life deployments, allowing customers or governments to audit, test, and even have independent parties verify the system [214-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for auditability, a central command centre and independent testing are central to the safe-AI monitoring and assurance discussion [S19] and the broader responsible AI guidelines [S20].
MAJOR DISCUSSION POINT
Need for oversight infrastructure
AGREED WITH
Minister Sridhar Babu, Lee Tiedrich, Mike Haley, Saibal Chakraborty
Argument 4
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply‑chain control) (Srinivas Tallapragada)
EXPLANATION
Srinivas differentiates two layers of sovereignty: strategic, which concerns control over data and policy, and technical, which involves ownership of the entire hardware and software supply chain. He urges governments to pursue both tracks, emphasizing that strategic sovereignty can deliver immediate benefits.
EVIDENCE
He defines strategic sovereignty as control over data, governance policies, and operational policies, which can be implemented now for value [247-250]. He describes technical sovereignty as control over the full supply chain, including chips, and recommends governments treat these as separate tracks, not letting the longer-term technical track delay the benefits of the strategic one [251-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The differentiation between strategic and technical AI sovereignty is explicitly addressed in the minister’s remarks on data governance and the full-stack sovereign AI briefing [S1] and [S14].
MAJOR DISCUSSION POINT
Two‑level AI sovereignty
AGREED WITH
Minister Sridhar Babu
Argument 5
Policy frameworks should be agile, allowing rapid updates as AI technology evolves (Srinivas Tallapragada)
EXPLANATION
Srinivas advocates for agile regulation that can be quickly revised as AI capabilities change, likening it to a product feedback loop. This approach would reduce fear of getting standards perfect from day one and enable continuous improvement.
EVIDENCE
He notes that current policy frameworks are slow, causing fear, and suggests a feedback loop that allows standards to be updated as new data emerges, akin to agile development in engineering [317-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for agile regulation and learning-by-doing is highlighted in the discussion on rapid AI adoption and agile policy cycles [S2].
MAJOR DISCUSSION POINT
Agile AI regulation
AGREED WITH
Lee Tiedrich
DISAGREED WITH
Saibal Chakraborty, Lee Tiedrich, Mike Haley
Argument 6
Success is reflected in measurable income growth for the bottom 50% of the population (Srinivas Tallapragada)
EXPLANATION
Srinivas envisions that within three years AI should have lifted the per‑capita income of the lowest half of the population, using this metric as a gauge of AI’s societal impact. He frames income uplift as the ultimate indicator of technology’s benefit.
EVIDENCE
He states his hope that in three years the bottom 50% of the income distribution will show measurable per-capita income growth, describing this as the real impact of the technology [355-357].
MAJOR DISCUSSION POINT
Income‑based impact metric
S
Saibal Chakraborty
5 arguments · 152 words per minute · 569 words · 224 seconds
Argument 1
Agentic AI enables end‑to‑end execution of business and government processes (Saibal Chakraborty)
EXPLANATION
Saibal asserts that the conversation has moved from solving isolated problems to enabling AI agents that can execute entire business or governmental workflows from start to finish. This represents a fundamental change in how AI is applied.
EVIDENCE
He remarks that the discussion has moved decisively towards agentic AI and that we are now looking at end-to-end AI-led execution of business processes or government processes [110-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to agentic AI that can execute full workflows, rather than isolated tasks, is noted in the agentic AI security-by-design overview [S17].
MAJOR DISCUSSION POINT
End‑to‑end AI execution
AGREED WITH
Minister Sridhar Babu, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Argument 2
AI can draft multi‑million‑dollar RFPs and automate public procurement workflows (Saibal Chakraborty)
EXPLANATION
Saibal raises the scenario where an AI agent prepares large‑scale procurement documents, highlighting the need to consider appropriate guardrails and human oversight for high‑value transactions. He questions the extent of autonomy such agents should have.
EVIDENCE
He describes an agent crafting a multi-million or billion-dollar RFP on behalf of the government and asks what guardrails are needed, whether full autonomy is possible, or if a final human layer is required to ensure correctness [148-154].
MAJOR DISCUSSION POINT
AI in public procurement
Argument 3
Public sector must decide the level of autonomy versus human oversight for high‑stakes tasks (Saibal Chakraborty)
EXPLANATION
Saibal emphasizes that governments need to determine how much autonomy to grant AI agents, especially for critical functions like procurement, balancing speed with procedural rigor. He underscores the importance of maintaining human checkpoints.
EVIDENCE
He asks whether an agent can be fully autonomous in high-stakes contexts or if a human layer must remain to dot the i’s and cross the t’s, noting the potential negative impact of mistakes [149-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN GA statement calls for universal guardrails, clear accountability and a balance between automation and human oversight for high-impact AI applications [S18].
MAJOR DISCUSSION POINT
Balancing autonomy and oversight
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Lee Tiedrich, Mike Haley
DISAGREED WITH
Lee Tiedrich, Srinivas Tallapragada, Mike Haley
Argument 4
Public officials need upskilling to understand AI trust limits and when human checks are required (Saibal Chakraborty)
EXPLANATION
Saibal points out that many public officials lack AI engineering expertise, so systematic upskilling is essential for them to recognize where AI outputs are trustworthy and where additional human verification is needed.
EVIDENCE
He notes that district-level officials are not AI engineers and must be upskilled to know what can be trusted and what requires extra human checks, highlighting this as a major component for AI adoption in the public sector [253-254].
MAJOR DISCUSSION POINT
Upskilling government staff
AGREED WITH
Minister Sridhar Babu
Argument 5
Success is achieved when a farmer can receive vernacular, AI‑driven advice at scale across India (Saibal Chakraborty)
EXPLANATION
Saibal defines success as the ability for every farmer to interact with an AI tool in their native language and obtain practical, actionable advice, thereby demonstrating inclusive AI impact.
EVIDENCE
He states that the true win for AI would be if a farmer could talk to a small language-model-powered tool in his or her own vernacular and receive practical advice on crops and cattle, scaled across India [334-335].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The climate-resilience panel describes farmer-focused AI tools that deliver advice in local languages, illustrating the envisioned scalable vernacular service [S2].
MAJOR DISCUSSION POINT
Inclusive AI for agriculture
M
Mike Haley
6 arguments · 213 words per minute · 1516 words · 426 seconds
Argument 1
Shift from narrow, task‑specific agents to systems‑level reasoning and chained actions (Mike Haley)
EXPLANATION
Mike observes that earlier AI agents were limited to narrow tasks, whereas current agents can perform chain‑of‑thought reasoning and coordinate multiple actions across systems. This marks a transition to more complex, integrated AI capabilities.
EVIDENCE
He contrasts last year’s narrow agents that solved specific problems with today’s agents that can abstract problems, perform chain-of-thought reasoning, and operate at a systems level [119-122].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The evolution toward chain-of-thought reasoning and systems-level AI agents is discussed in the agentic AI capabilities briefing [S17].
MAJOR DISCUSSION POINT
From narrow to systems‑level AI
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada
Argument 2
AI agents can analyze floodplains, optimise drainage, and assist in infrastructure design (Mike Haley)
EXPLANATION
Mike provides a concrete use case where AI agents evaluate floodplain characteristics and suggest drainage optimizations, illustrating how agents can augment civil‑engineering design processes.
EVIDENCE
He explains that AI agents can analyze floodplains, evaluate water drainage, and optimize design decisions early in the process, thereby improving infrastructure outcomes [169-174].
MAJOR DISCUSSION POINT
AI‑assisted infrastructure design
Argument 3
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
EXPLANATION
Mike stresses that AI systems are inherently probabilistic and cannot be made perfectly deterministic; therefore, engineers must retain the ability to review, adjust, and re‑run outputs, which builds trust through transparency and control.
EVIDENCE
He notes that guardrails cannot guarantee perfect results, so systems should provide accuracy feedback and allow engineers to intervene, tweak, reassess, or rerun the model, emphasizing that this control is essential for trust [217-231].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI guidelines stress human-in-the-loop control, transparency and the ability to adjust probabilistic model outputs [S20].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop for probabilistic AI
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Lee Tiedrich, Saibal Chakraborty
DISAGREED WITH
Minister Sridhar Babu, Lee Tiedrich
Argument 4
Engineers must retain the ability to review, adjust, and re‑run AI outputs to maintain trust (Mike Haley)
EXPLANATION
Mike reiterates that giving engineers the capacity to modify AI results and understand the underlying processes is crucial for confidence in AI deployments. This aligns with the broader theme of transparent, controllable systems.
EVIDENCE
He describes how engineers can give feedback, adjust parameters, and rerun models, and that this ability to control the system underpins trust rather than expecting flawless outputs [224-231].
MAJOR DISCUSSION POINT
Control loops for trustworthy AI
Argument 5
Industry can pre‑empt regulation by providing “transparency cards” that disclose model provenance, accuracy, and bias (Mike Haley)
EXPLANATION
Mike outlines a proactive industry measure where each AI feature includes a “transparency card” similar to a nutrition label, detailing model type, training data, accuracy, and known biases. This aims to give governments clear information and potentially shape future standards.
EVIDENCE
He explains that every AI feature in their software now includes a transparency card showing model details, training data, accuracy, and bias information, and that this practice has attracted interest as a possible standard [301-304].
MAJOR DISCUSSION POINT
Proactive disclosure for regulatory alignment
DISAGREED WITH
Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada
Argument 6
Success means faster, safer infrastructure development with public confidence and engineer endorsement (Mike Haley)
EXPLANATION
Mike envisions AI enabling infrastructure projects to be completed more quickly and safely, while also ensuring that engineers and the public trust the technology. He links speed, safety, and confidence as key success metrics.
EVIDENCE
He states that in a few years we should see infrastructure built faster than ever, without compromising safety, and that engineers and the public must feel comfortable and secure with the AI-enabled processes [345-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s digital and industrial AI strategy emphasizes trusted, interoperable infrastructure that accelerates projects while maintaining safety and public confidence [S15].
MAJOR DISCUSSION POINT
Accelerated, trustworthy infrastructure
L
Lee Tiedrich
5 arguments · 197 words per minute · 833 words · 252 seconds
Argument 1
AI agents can act on behalf of people, moving beyond answering queries (Lee Tiedrich)
EXPLANATION
Lee highlights that the emergence of agentic AI allows systems not only to provide answers but also to take actions on behalf of users, representing a major shift in AI capability.
EVIDENCE
She notes that the biggest change is the ability of AI not only to do end-to-end tasks but also to act on behalf of people [115-117].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The agentic AI overview highlights the new capability of agents to act on users’ behalf, not just provide answers [S17].
MAJOR DISCUSSION POINT
AI acting for users
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Mike Haley, Srinivas Tallapragada
Argument 2
Over‑reliance risks demand sandboxes, human‑in‑the‑loop safeguards, and clear liability rules (Lee Tiedrich)
EXPLANATION
Lee warns that excessive reliance on AI without proper safeguards can be dangerous, recommending sandbox environments, human‑in‑the‑loop controls, and clear liability frameworks to mitigate risks.
EVIDENCE
She discusses the need for sandboxes, human-in-the-loop safeguards, and considerations of liability and testing when agents call third-party agents [190-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN GA call for universal guardrails and the safe-AI at scale paper both stress sandbox testing, human-in-the-loop safeguards and clear liability frameworks [S18].
MAJOR DISCUSSION POINT
Risk mitigation for AI deployment
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Mike Haley, Saibal Chakraborty
DISAGREED WITH
Minister Sridhar Babu, Mike Haley
Argument 3
Human judgment is essential for selecting safe use cases and applying guardrails (Lee Tiedrich)
EXPLANATION
Lee stresses that selecting appropriate use cases and implementing guardrails requires human judgment, emphasizing a cautious and smart approach to AI adoption in government.
EVIDENCE
She advises picking use cases wisely, noting that AI excels in some areas while others are not ready for prime time, and that over-reliance can cause neglect of necessary guardrails and human oversight [191-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on responsible AI deployment underscores the role of human judgment in use-case selection and guardrail design [S18].
MAJOR DISCUSSION POINT
Human‑centric AI governance
Argument 4
Global, multi‑disciplinary standards and evaluation ecosystems are needed, with localisation for different jurisdictions (Lee Tiedrich)
EXPLANATION
Lee calls for internationally coordinated standards and evaluation frameworks for AI, while allowing localisation to respect different legal and cultural contexts. She sees this as a foundation for effective regulation.
EVIDENCE
She describes the need for global, multi-disciplinary standards, evaluation ecosystems, and the necessity to localise standards for different jurisdictions, noting differing regulatory approaches in the U.S. and Europe [260-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN GA statement calls for globally coordinated, multi-disciplinary AI standards that can be localized to regional legal and cultural contexts [S18].
MAJOR DISCUSSION POINT
International AI standards with local adaptation
AGREED WITH
Srinivas Tallapragada
DISAGREED WITH
Saibal Chakraborty, Srinivas Tallapragada, Mike Haley
Argument 5
Success includes active AI safety institutes sharing evaluation techniques worldwide (Lee Tiedrich)
EXPLANATION
Lee envisions a future where AI safety institutes are active, collaborating globally to develop and share evaluation methods, thereby strengthening AI safety practices across regions.
EVIDENCE
She mentions that within three years there should be active AI safety institutes sharing evaluation techniques, with efforts to localise these practices for both Global North and South [339-343].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same UN GA discussion envisions active AI safety institutes that share evaluation methods across regions to strengthen global AI safety [S18].
MAJOR DISCUSSION POINT
Global AI safety collaboration
Agreements
Agreement Points
Recognition of a major shift from generative, answer‑based AI to agentic, action‑oriented AI
Speakers: Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Transition from generative AI to "agentic" AI; the traditional search bar is being replaced (Minister Sridhar Babu)
Agentic AI enables end-to-end execution of business and government processes (Saibal Chakraborty)
AI agents can act on behalf of people, moving beyond answering queries (Lee Tiedrich)
Shift from narrow, task-specific agents to systems-level reasoning and chained actions (Mike Haley)
An AI agent must have a defined role, knowledge, memory, actuation ability, and guardrails (Srinivas Tallapragada)
All speakers highlighted that AI is moving beyond simple answer generation toward autonomous, action-taking agents that can execute whole workflows, signalling a new paradigm in AI development [22-23][110-112][115-117][119-122][136-144].
POLICY CONTEXT (KNOWLEDGE BASE)
The minister highlighted that AI is moving beyond simple question answering toward autonomous agents that can take real-world actions, marking a transition from generative models to action-oriented systems [S37].
Need for robust guardrails, auditability and human‑in‑the‑loop oversight for AI agents
Speakers: Minister Sridhar Babu, Srinivas Tallapragada, Lee Tiedrich, Mike Haley, Saibal Chakraborty
AI should be treated as a co-governor, with policies that embed human-pilot oversight (Minister Sridhar Babu)
Robust guardrails, auditability, and a command-center are required for confidence in AI deployments (Srinivas Tallapragada)
Over-reliance risks demand sandboxes, human-in-the-loop safeguards, and clear liability rules (Lee Tiedrich)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Public sector must decide the level of autonomy versus human oversight for high-stakes tasks (Saibal Chakraborty)
Every panelist stressed that AI agents must be bounded by clear guardrails, be auditable, and retain human oversight, especially for high-impact government functions [25-30][136-144][214-215][190-203][217-231][148-154].
POLICY CONTEXT (KNOWLEDGE BASE)
Panelists stressed the critical importance of enterprise guardrails, auditability and human-in-the-loop (or on-the-loop) oversight for agentic AI, especially in high-risk environments [S41][S42].
Capacity development and training are essential for effective AI deployment
Speakers: Minister Sridhar Babu, Saibal Chakraborty
AI advisors for farmers, flood prediction, climate-responsive services, and AI-driven urban planning (Minister Sridhar Babu)
Public officials need upskilling to understand AI trust limits and when human checks are required (Saibal Chakraborty)
Both the minister and Saibal highlighted the importance of training end users, from farmers in rural Telangana to public officials across India, to ensure AI systems are used responsibly and effectively [48-55][253-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple workshops and roadmaps identify capacity building (technical training and policy-level education) as a prerequisite for responsible AI implementation across sectors [S33][S34][S36][S38].
AI is being positioned as core public infrastructure and a sovereign data ecosystem
Speakers: Minister Sridhar Babu, Srinivas Tallapragada
Creation of a sovereign AI nerve centre, AI city, and open data exchange platform for statewide services (Minister Sridhar Babu)
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply-chain control) (Srinivas Tallapragada)
The minister and Srinivas concur that AI should be treated as public infrastructure, like roads or electricity, backed by a sovereign data platform that guarantees strategic control over data and policy while charting a longer-term path to technical sovereignty [40-41][70-77][247-252].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s white paper treats AI compute, datasets and models as a digital public good, reflecting a broader view of AI as essential public infrastructure and a sovereign data ecosystem [S30][S31][S32].
Regulatory frameworks for AI must be agile and adaptable to rapid technological change
Speakers: Srinivas Tallapragada, Lee Tiedrich
Policy frameworks should be agile, allowing rapid updates as AI technology evolves (Srinivas Tallapragada)
Global, multi-disciplinary standards and evaluation ecosystems are needed, with localisation for different jurisdictions (Lee Tiedrich)
Both speakers argue for flexible, evolving policy and standards mechanisms that can keep pace with AI advances, combining agile domestic regulation with internationally coordinated, locally-adapted standards [317-324][260-280].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses call for agile, risk-based regulatory approaches that can keep pace with fast-moving AI technologies, echoing recommendations from the EU AI Act and IGF discussions [S45][S46][S47].
Similar Viewpoints
Both panelists see the emergence of agentic AI as a catalyst that moves applications from isolated, task‑specific tools toward integrated, system‑wide process automation, enabling governments to streamline complex workflows [110-112][119-122].
Speakers: Saibal Chakraborty, Mike Haley
Agentic AI enables end-to-end execution of business and government processes (Saibal Chakraborty)
Shift from narrow, task-specific agents to systems-level reasoning and chained actions (Mike Haley)
Unexpected Consensus
AI‑driven farmer support as a key success metric
Speakers: Minister Sridhar Babu, Saibal Chakraborty
AI advisors for farmers, flood prediction, climate-responsive services, and AI-driven urban planning (Minister Sridhar Babu)
Success is achieved when a farmer can receive vernacular, AI-driven advice at scale across India (Saibal Chakraborty)
While the minister discussed pilot projects that train AI with farmer dialects and local knowledge, Saibal framed nationwide vernacular advisory capability as the ultimate measure of AI success, revealing an unexpected alignment between a state-level implementation focus and a pan-India inclusive impact goal [48-55][334-335].
POLICY CONTEXT (KNOWLEDGE BASE)
Agritech literature emphasizes AI-enabled farmer support (optimising water, fertilizer and pest use) as a primary metric for impact, while noting data accessibility and ecosystem support as critical enablers [S48][S49][S50].
Overall Assessment

The panel displayed strong convergence on several fronts: the transition to agentic AI, the necessity of guardrails and human oversight, the importance of capacity building, the framing of AI as sovereign public infrastructure, and the need for agile, standards‑based regulation. These shared positions cut across government, academia and industry, indicating a common understanding of both opportunities and risks associated with AI agents.

High consensus – the speakers largely agree on the direction of AI development and the policy/operational safeguards required, which bodes well for coordinated action on AI governance, capacity building and infrastructure investment.

Differences
Different Viewpoints
Confidence in AI’s predictive capability for disaster/flood response and the degree of autonomy it should have
Speakers: Minister Sridhar Babu, Mike Haley, Lee Tiedrich
AI should be treated as a co-governor, with policies that embed human-pilot oversight (Minister Sridhar Babu)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Over-reliance risks demand sandboxes, human-in-the-loop safeguards, and clear liability rules (Lee Tiedrich)
The Minister claims AI can act as a co-governor that predicts floods before clouds gather and allocates resources pre-emptively [45-48], while Mike stresses that AI is inherently probabilistic, cannot guarantee perfect predictions and therefore requires human engineers to retain control and intervene [217-231]. Lee adds that over-reliance on such autonomous systems is risky and calls for sandboxes, human-in-the-loop safeguards and liability frameworks [190-203]. These positions reflect a clash between a high-confidence, autonomous vision and a cautious, human-centric safety stance.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on climate-resilient AI note its potential for high-resolution flood forecasting but also highlight the need for reliable data, human oversight and clear limits on autonomy in emergency contexts [S39][S41].
Preferred mechanism for ensuring safe, trustworthy AI deployment in the public sector
Speakers: Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada, Mike Haley
Public sector must decide the level of autonomy versus human oversight for high-stakes tasks (Saibal Chakraborty)
Global, multi-disciplinary standards and evaluation ecosystems are needed, with localisation for different jurisdictions (Lee Tiedrich)
Policy frameworks should be agile, allowing rapid updates as AI technology evolves (Srinivas Tallapragada)
Industry can pre-empt regulation by providing "transparency cards" that disclose model provenance, accuracy, and bias (Mike Haley)
Saibal argues that governments need to set guardrails and decide how much autonomy to grant AI agents, especially for critical functions like procurement [148-154]. Lee proposes a top-down solution: develop global, multi-disciplinary standards and evaluation ecosystems that can be localised [260-280]. Srinivas suggests a bottom-up, agile regulatory approach where standards are continuously updated through feedback loops [317-324]. Mike offers a market-driven answer, where industry voluntarily adds transparency cards to each AI feature to inform regulators and users [301-304]. The disagreement lies in whether the primary driver of safe AI should be government-mandated standards, agile policy cycles, or industry self-regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
Workshops on trustworthy AI stress the role of robust guardrails, auditability, international standards and human accountability as mechanisms for safe public-sector AI deployment [S41][S51][S53].
Unexpected Differences
The Minister’s optimistic claim that AI can replace the traditional search bar and act as a proactive co‑governor versus panelists’ caution about AI’s probabilistic limits and need for human control
Speakers: Minister Sridhar Babu, Mike Haley, Lee Tiedrich
AI should be treated as a co-governor, with policies that embed human-pilot oversight (Minister Sridhar Babu)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Over-reliance risks demand sandboxes, human-in-the-loop safeguards, and clear liability rules (Lee Tiedrich)
The Minister declares that the search bar is dying and that AI will act proactively as a co-governor for flood prediction and resource allocation [33-34][45-48]. This confident, near-autonomous vision was unexpected given the panel’s consistent emphasis on AI’s probabilistic nature, the necessity of human oversight, and the risks of over-reliance [217-231][190-203]. The contrast highlights a surprising gap between policy optimism and technical caution.
POLICY CONTEXT (KNOWLEDGE BASE)
The minister’s vision of AI as a proactive co-governor contrasts with panelist concerns about probabilistic outputs and the necessity of human oversight, reflecting an ongoing debate on agency, safety and governance of AI systems [S37][S41][S42][S38].
Overall Assessment

The discussion revealed a core consensus that AI agents must be governed by robust guardrails, auditability, and human oversight. The main points of contention centered on how much autonomy to grant AI systems—especially for high‑stakes public functions like disaster prediction—and on the best pathway to achieve safe deployment, whether through government‑driven standards, agile policy cycles, or industry‑led transparency measures. The unexpected optimism expressed by the Minister about AI’s autonomous capabilities contrasted sharply with the panel’s cautionary stance, underscoring a tension between policy ambition and technical realism.

Moderate to high. While participants share the overarching goal of trustworthy AI, they diverge significantly on the degree of autonomy and the primary mechanism for regulation, which could affect the speed and effectiveness of AI integration into public services.

Partial Agreements
All four speakers concur that AI agents need strong guardrails, auditability, and human oversight before being deployed in critical public‑sector contexts. However, they diverge on the concrete mechanisms: Saibal focuses on procedural guardrails, Lee on sandboxing and liability, Srinivas on a centralized command‑center and auditability, and Mike on engineer‑level transparency and control [148-154][190-203][214-215][217-231].
Speakers: Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada, Mike Haley
Public sector must decide the level of autonomy versus human oversight for high-stakes tasks (Saibal Chakraborty)
Over-reliance risks demand sandboxes, human-in-the-loop safeguards, and clear liability rules (Lee Tiedrich)
Robust guardrails, auditability, and a command-center are required for confidence in AI deployments (Srinivas Tallapragada)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Takeaways
Key takeaways
AI is moving from narrow, query-based tools to agentic systems that can act end-to-end on behalf of users and governments.
Agentic AI requires a defined role, knowledge base, memory, actuation capability, and robust guardrails to be trustworthy.
Governments can leverage AI agents for concrete public-service challenges such as flood prediction, agricultural advice, disaster response, public procurement, and infrastructure design.
Treating AI as a form of public infrastructure demands sovereign data strategies, exemplified by Telangana's AI nerve centre and open data exchange platform.
Human-in-the-loop oversight, upskilling of public officials, and transparent control mechanisms are essential because AI systems are probabilistic and can hallucinate.
Standardisation, evaluation ecosystems, and agile policy frameworks are needed to keep regulation in step with rapid AI advances, with localisation for different jurisdictions.
Success metrics should focus on tangible societal impact, e.g., vernacular AI assistance for farmers, income growth for low-income populations, and faster, safer infrastructure development.
Resolutions and action items
Telangana will continue building its sovereign AI nerve centre (ICOM) and expand the Telangana data exchange platform to support AI-driven governance.
Governments are encouraged to establish a "command-center" architecture for auditing, testing and monitoring AI agents before deployment.
Public sector bodies should launch upskilling programmes so officials understand AI trust limits and can apply human-in-the-loop checks.
Adopt a phased "crawl-walk-run" rollout model for AI agents, starting with low-risk pilots (e.g., farmer advisory bots) and expanding as guardrails mature.
Industry participants (e.g., Autodesk) will provide "transparency cards" that disclose model provenance, accuracy, bias and data sources for AI features.
Create sandbox environments for high-stakes use cases (e.g., AI-generated RFPs) to test guardrails, liability rules and interoperability of third-party agents.
Policymakers should design agile regulatory frameworks that allow standards and evaluation criteria to be updated iteratively as technology evolves.
Unresolved issues
Determining the optimal balance between full AI autonomy and required human oversight for high-impact government tasks.
Establishing clear liability and accountability mechanisms when AI agents invoke third-party services.
Defining concrete, universally accepted evaluation metrics and certification processes for AI agents across diverse jurisdictions.
Achieving technical sovereignty (full control over hardware supply chains) while still reaping the benefits of strategic data sovereignty.
Ensuring equitable access to vernacular AI tools for all farmers and marginalized communities at scale.
Synchronising global standards with local cultural, legal, and linguistic requirements without stalling innovation.
Suggested compromises
Implement human-in-the-loop safeguards for critical processes while allowing agents to operate autonomously on lower-risk tasks.
Adopt a two-track sovereignty approach: pursue immediate strategic data sovereignty and plan for longer-term technical sovereignty.
Use a "crawl-walk-run" methodology: start with simple, well-guarded pilots, then progressively expand functionality as confidence grows.
Combine agile policy updates with sandbox testing, enabling rapid iteration of standards without waiting for full legislative cycles.
Pair AI agents with transparent control panels that let engineers intervene, adjust parameters, and override outputs when needed.
Thought Provoking Comments
Follow-up Questions
What guardrails should be put around AI agents when they generate large public procurement documents (RFPs), and should a human oversight layer remain?
High‑stakes procurement requires accountability and safeguards to prevent costly errors, making it essential to define appropriate guardrails and determine the necessity of human‑in‑the‑loop review.
Speaker: Saibal Chakraborty
Should AI agents in the public sector be fully autonomous or always include a human‑in‑the‑loop for critical decisions?
Clarifying the degree of autonomy influences governance design, risk management, and public trust in AI‑driven government processes.
Speaker: Saibal Chakraborty
How can governments engineer trust into AI agent systems so that ministers and secretaries feel confident using them?
Establishing trust mechanisms (auditability, transparency, control) is prerequisite for adoption of AI agents in high‑impact governmental roles.
Speaker: Victoria Espinel
What are the risks of over‑reliance on AI agents in government services, and how can they be mitigated?
Identifying over‑reliance hazards (e.g., blind trust, lack of human oversight) helps shape safeguards and balanced deployment strategies.
Speaker: Victoria Espinel
How can governments balance strategic and technical AI sovereignty, achieving data control now while pursuing full supply‑chain sovereignty later?
Strategic sovereignty (data governance) can deliver immediate benefits, while technical sovereignty (hardware/control of supply chain) requires longer‑term investment; balancing both is crucial for national security and autonomy.
Speaker: Srinivas Tallapragada
What upskilling programs are needed for public‑sector staff to effectively work with AI agents and understand their limitations?
Public officials often lack AI expertise; targeted training ensures they can interpret outputs, apply guardrails, and maintain oversight.
Speaker: Saibal Chakraborty
What standards and evaluation mechanisms should be developed for AI agents, and how can they be localized for different cultural and regulatory contexts?
Common standards enable consistent safety assessments, while localization respects regional legal, cultural, and ethical differences.
Speaker: Lee Tiedrich
How can regulatory frameworks be made agile to keep pace with the rapid evolution of AI technologies?
Agile regulation allows policies to be updated as AI capabilities change, preventing regulatory lag and fostering innovation.
Speaker: Srinivas Tallapragada
How should an evaluation ecosystem for AI safety and security be built, especially involving AI Centers in the Global South?
A coordinated evaluation infrastructure, including regional AI safety institutes, is needed to test, certify, and share best practices globally.
Speaker: Lee Tiedrich
What digital infrastructure (e.g., BIM, standardized data models) is required to enable AI agents to assist in designing and managing physical infrastructure?
Accurate, standardized digital representations of assets are prerequisite for AI agents to generate reliable designs and operational recommendations.
Speaker: Mike Haley
How can the impact of AI on inclusive outcomes be measured, such as providing vernacular language tools for farmers across India?
Defining metrics for accessibility and effectiveness of AI tools in local languages is essential to assess inclusive benefits.
Speaker: Saibal Chakraborty
What methodologies can be used to assess whether AI deployments are raising incomes for the bottom 50% of the income distribution?
Quantifying socio‑economic impact on low‑income populations provides a concrete measure of AI’s societal value.
Speaker: Srinivas Tallapragada
What governance models are effective for sovereign AI nerve centers and state‑run data exchange platforms?
Understanding organizational, legal, and operational frameworks for state‑owned AI hubs informs replication and scaling.
Speaker: Minister Sridhar Babu
How can AI agents be integrated with real‑time climate data pipelines to predict floods, droughts, and other events for proactive governance?
Linking AI agents to live environmental data can enable anticipatory actions, reducing disaster impact.
Speaker: Minister Sridhar Babu
What ethical and liability frameworks are needed for multi‑agent ecosystems where agents invoke third‑party agents?
When agents call external services, clear rules for responsibility, testing, and risk allocation are required.
Speaker: Lee Tiedrich
What technical guardrails (e.g., hallucination detection, bias mitigation) are required for government‑deployed AI agents?
Ensuring outputs are accurate, unbiased, and free from hallucinations is critical for trustworthy public‑sector AI applications.
Speaker: Srinivas Tallapragada, Mike Haley

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.