Agents of Change: AI for Government Services & Climate Resilience
20 Feb 2026 11:00h - 12:00h
Summary
The session focused on the emerging role of AI agents in public governance, introduced by Minister Sridhar Babu and explored by a panel of experts. The Minister framed the current moment as an inflection point where the shift from generative AI that merely answers to agentic AI that can act is redefining policy making [22-24]. He argued that intelligence should be treated as public infrastructure and described Telangana’s vision of AI as a co-governor that can forecast floods on the Musi river and pre-allocate resources before crises hit [45-48]. Pilot projects cited include AI advisors trained together with farmers in local dialects, a Telugu-language land-record AI that compresses response times, and satellite-driven heat analysis that will guide Hyderabad’s urban cooling strategy by 2035 [53-62]. The state has also launched a sovereign AI nerve centre (ICOM) and an open data exchange platform with over 1,000 datasets, which he said already power anticipatory health care and climate-resilient services [72-78][80-86]. Panelists defined an AI agent as a role-aware system with memory that can act across digital channels, insisting that guardrails and a trust layer are essential to curb hallucinations and bias [133-141][142-144][145-147]. They agreed that the biggest change is the move from narrow, task-specific bots to end-to-end, systems-level agents capable of autonomously executing business or government processes [110-113][119-122]. However, high-stakes applications such as drafting multi-million-dollar RFPs demand strong safeguards and likely retain a human in the loop for final validation [148-154]. Concrete use cases highlighted were AI-driven flood-plain analysis for water infrastructure and police-assistant agents deployed in the UK and Tasmania, demonstrating immediate value while larger ambitions continue [170-174][185-188].
Speakers stressed the importance of data sovereignty (strategic control now, technical supply-chain control later), advocating a two-track approach so governments can benefit today while building longer-term capabilities [246-250][251-254]. Upskilling public officials and providing transparent “nutrition-label” disclosures were identified as critical guardrails, acknowledging that probabilistic systems can never be perfectly deterministic [253][301-304][218-222]. Success metrics proposed included vernacular AI tools that empower farmers, faster delivery of physical infrastructure, and measurable income growth for the bottom 50% of earners [334-335][345-347][355-356]. The discussion concluded that AI agents can become a force multiplier for governance if standards, evaluation frameworks, and agile regulation evolve in step with rapid technological change [260-268][317-324].
Keypoints
Major discussion points
– Shift from traditional AI to agentic AI – The Minister highlighted moving “from generative AI that simply answers… to agentic AI that acts now” [22-24]. Panelists echoed this transition, describing it as a move to “end-to-end AI-led execution of business processes or government processes” [110-113] and noting that “the biggest change… is the emergence of agentic AI” [116-117]. Mike added that the evolution is “from task specific to systems level” [120-122], while Srini observed a shift “from co-pilot human in the loop to agents which can act and really provide value” [124-125].
– Concrete Telangana initiatives using AI agents – The Minister gave multiple examples: AI-driven flood prediction for the Musi river [45-48], a Telugu-first AI that “records land records, interprets satellite indicators and compresses the time between the climate event and an incident settlement” [57-59], satellite-driven heat analysis shaping urban cooling strategies [60-62], solar-powered edge compute nodes keeping services alive during grid failures [63-64], and the creation of a “sovereign AI nerve centre” and a state-wide data exchange platform that powers health-risk anticipation and climate-resilient planning [73-84][85-88].
– Guardrails, trust, and human oversight – Srini stressed that an agent must have a “trust layer” with guardrails to prevent hallucinations, bias, and toxicity [144-148]. Lee warned of “risks of over-reliance” and emphasized careful use-case selection, sandboxes, and clear liability [190-203]. Mike highlighted the need for transparency (e.g., “nutrition-label” style cards) and human-in-the-loop control to build trust [216-231][301-306]. Saibal pointed out that up-skilling public-sector staff is essential because “the person… is not an AI engineer” [253-254].
– Strategic and technical sovereignty over data and AI – The Minister described a “sovereign AI nerve centre” and an open data pipeline that keeps “all the data… on this platform” [73-77]. Srini differentiated “strategic sovereignty” (control over data and policies) from “technical sovereignty” (control over the full supply chain) and urged governments to pursue both tracks [246-251].
– Future success metrics and vision – In closing, panelists offered concrete measures of progress: a farmer being able to get vernacular advice from a small language model [334-335]; an active AI safety evaluation ecosystem shared globally [339-343]; infrastructure built faster and safely with public confidence [345-351]; and measurable uplift in income for the bottom 50% of the population [355-357].
Overall purpose / goal
The discussion aimed to showcase how AI agents can become “force multipliers” for public governance, illustrated by Telangana’s pioneering projects, while jointly exploring the policy, technical, and ethical frameworks needed to deploy them responsibly. Participants sought to define practical use cases, outline necessary guardrails, and envision measurable outcomes for a “better tomorrow” powered by trustworthy, sovereign AI.
Overall tone
The conversation began with an optimistic and visionary tone, celebrating Telangana’s breakthroughs and the promise of agentic AI. As the panel moved into technical details, the tone became analytical and cautionary, focusing on risks, guardrails, and the need for human oversight. In the final segment, the tone shifted to forward-looking and hopeful, emphasizing concrete success metrics and collaborative pathways for governments and industry. Throughout, the dialogue remained constructive and collaborative.
Speakers
– Victoria Espinel – Panel moderator and discussion facilitator; representative of Salesforce (thanked Salesforce team)[S12]
– Minister Sridhar Babu – Minister (Telangana), policymaker and government official discussing AI governance[S5]
– Srinivas Tallapragada – Engineering leader for a major AI platform (referred to as “Srini”), focuses on AI agents and trust layers[S8]
– Saibal Chakraborty – Panelist, AI policy and public-sector expert[S9]
– Lee Tiedrich – Professor, AI safety researcher; contributed to International AI Safety Report[S2]
– Mike Haley – Senior Director of AI at Autodesk; discusses AI applications in infrastructure and guardrails[S1]
Additional speakers:
– None
The session opened with Victoria Espinel welcoming Minister Sridhar Babu, describing him as a “very special guest” and inviting him to the podium [1-6]. The Minister began by greeting the audience, highlighting Delhi as the capital of India and noting the presence of distinguished panelists and industry leaders [7-12]. He framed the discussion around “AI agents for a Better Tomorrow” and positioned the present moment as a fundamental inflection point in governance [16-17].
A central theme introduced by the Minister was the transition from generative AI, which merely answers questions, to “agentic AI that acts now” [22-24]. He argued that the traditional search bar is dying, to be replaced by more profound, action-oriented systems [33-34]. This shift, he suggested, marks the third era, defined by the intelligence of the system, which should be treated as public infrastructure rather than a product [36-41]. He illustrated this with three “lives” of AI in the country: research, policy, and finally, real-world impact that addresses dust, drought, monsoons and markets [43-44].
He reiterated the conference theme “AI for everyone, AI for human welfare” [85-86] and, after outlining the vision, thanked the Salesforce team, the event organizers, and the audience for the opportunity to present Telangana’s work [90-95]. He also framed the future of governance as being forged in the “living laboratories of the Global South”, citing Hyderabad as a prime example [95-98].
The Minister then detailed several Telangana pilots that embody this vision. An AI co-governor is being used to predict floods on the Musi river and allocate resources before a crisis materialises [45-48]. In agriculture, AI advisors are being trained together with farmers, incorporating local dialects, soil wisdom and lived patterns into the model [52-54]. A Telugu-first AI system now records land records, interprets satellite indicators and dramatically shortens the response time between climate events and incident settlement [57-59]. Satellite-driven heat analysis is already shaping zoning, green-belt creation and urban cooling strategies for Hyderabad, with a target implementation by 2035 [60-62]. Edge-compute nodes powered by solar energy keep government services operational during grid failures, a first for any Indian state [63-64].
Telangana has launched what the Minister called the country’s first sovereign AI nerve centre, ICOM, intended as an AI innovation hub that supports R&D, talent development and deep integration of intelligence into governance [72-73]. Complementing this is a state-wide data-exchange platform that hosts over 1,084 datasets, converting administrative exhaust into ecological signals and enabling anticipatory health care and climate-resilient services [73-84]. The real breakthrough, he stressed, lies not in isolated projects but in the architecture that binds them together, such as the upcoming AI-City and the net-zero Bharat Future City, which are envisioned as self-learning, sustainable territories that generate their own compute resources and serve as policy-advisory platforms [68-72].
Panelists then converged on a definition of an AI agent. Saibal Chakraborty described the shift as moving from solving discrete problems to end-to-end AI-led execution of business or government processes [110-113]. Lee Tiedrich added that agentic AI can act on behalf of people, extending beyond mere answer generation [115-117]. Mike Haley highlighted the evolution from task-specific bots to systems-level agents capable of chained reasoning and multi-agent orchestration [119-122]. Srinivas Tallapragada noted the transition from a “co-pilot human in the loop” to agents that can independently provide business value [124-125]. Srinivas Tallapragada enumerated the essential components of an agent: a defined role, knowledge, short- and long-term memory, actuation capability across digital channels, and a “trust layer” of guardrails to prevent hallucinations, bias and toxicity [133-147].
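The anatomy Tallapragada describes (a role, knowledge, short- and long-term memory, the ability to act, and a guardrail-enforcing trust layer) can be pictured as a simple data structure. The sketch below is purely illustrative: the class and field names are hypothetical, not any vendor's actual API, and a real agent would call a model or external service where this one merely echoes.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical sketch of the agent components named on the panel."""
    role: str                                              # the job the agent is meant to do
    knowledge: dict                                        # domain facts it can draw on
    short_term_memory: list = field(default_factory=list)  # current interaction context
    long_term_memory: list = field(default_factory=list)   # persisted history
    guardrails: list = field(default_factory=list)         # trust-layer checks on requests

    def act(self, request: str) -> str:
        """Run every guardrail check before acting; refuse if any fails."""
        for check in self.guardrails:
            if not check(request):
                return "blocked: request failed a guardrail check"
        self.short_term_memory.append(request)  # remember what was asked
        # A real system would invoke a model or API here; this sketch echoes.
        return f"{self.role} handling: {request}"

# Example: a land-records agent whose single guardrail rejects off-topic requests.
agent = Agent(
    role="land-records assistant",
    knowledge={"language": "Telugu"},
    guardrails=[lambda req: "land" in req.lower()],
)
print(agent.act("Look up land record 42"))  # passes the guardrail
print(agent.act("Tell me a joke"))          # blocked by the guardrail
```

The point of the structure is that the guardrail check runs before any action, mirroring the panel's insistence that the trust layer wraps everything the agent does.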
The panel collectively stressed that robust guardrails, auditability, and human-in-the-loop oversight are indispensable for high-stakes government applications [148-154][190-203][216-231]. They emphasized the need for a command-centre architecture that enables testing, auditing and independent verification before deployment [214-215], sandboxes and clear liability rules for use-case selection [190-203], and transparent control panels that allow engineers to intervene, reassess and override outputs, thereby building trust [216-231].
Capacity development was highlighted as essential. The Minister’s pilots with farmers exemplify how end-users can be trained to contribute data and benefit from AI [48-55]. Saibal stressed that public-sector officials, who are not AI engineers, must be up-skilled to understand trust limits and know when human checks are necessary [253-254]. This aligns with broader policy observations that AI governance requires multidisciplinary collaboration among policymakers, lawyers, engineers and sector specialists [S1,S54].
Data sovereignty emerged as another focal point. The Minister presented Telangana’s sovereign AI nerve centre and open data pipeline as a model for treating AI as core public infrastructure [72-77]. Srinivas Tallapragada distinguished “strategic sovereignty” (control over data and policies) from “technical sovereignty” (control over the full hardware supply chain), urging governments to pursue both tracks: immediate strategic control now, with a longer-term plan for technical independence [246-254].
Panelists agreed that standards and regulation must be agile to keep pace with rapid AI advances. Lee advocated for global, multidisciplinary standards and evaluation ecosystems that can be localised to respect cultural and legal differences [260-280,S1]. Srinivas Tallapragada proposed an “agile regulation” model, where policy frameworks incorporate feedback loops and can be updated iteratively, mirroring product-development cycles [317-324]. Mike described an industry-led approach: embedding “transparency cards” (nutrition-label-style disclosures of model provenance, data sources, accuracy and bias) into every AI feature, thereby giving governments clear information and pre-empting regulatory lag [301-306].
Concrete use-cases were explored. Mike detailed AI agents that analyse floodplains and optimise drainage in water-system design, illustrating how agents can assist early-stage infrastructure decisions despite imperfect inputs [170-174]. Srinivas Tallapragada cited police-assistant agents deployed in the UK (Bobby) and Tasmania (Terry), which handle non-emergency citizen queries and support field officers, demonstrating immediate value in public safety [185-188]. The panel also discussed AI-driven disaster response, with the Minister envisioning anticipatory actions that “counter dust, drought, monsoons and markets” [44-45], while others cautioned that probabilistic predictions require human oversight to avoid over-confidence [217-231][190-203].
Disagreements centred on the degree of autonomy appropriate for critical government functions. The Minister described AI as a “co-governor” that can act proactively (e.g., flood prediction) [45-48], whereas Mike and Lee highlighted the inherent uncertainty of AI outputs and the necessity of human-in-the-loop safeguards [217-231][190-203]. A second point of contention involved the primary mechanism for ensuring safe deployment: Saibal favoured procedural guardrails and human oversight, Lee pushed for internationally harmonised standards, Srinivas Tallapragada advocated for agile, feedback-driven policy, and Mike suggested industry self-regulation via transparency cards [148-154][260-280][317-324][301-306].
When asked to envision success metrics three years hence, panelists offered concrete indicators. Saibal suggested that the true win would be a farmer receiving vernacular AI advice at scale across India [334-335]. Lee envisaged an active AI safety evaluation ecosystem, with AI Centres of Excellence sharing techniques globally [339-343]. Mike highlighted faster, safer infrastructure development that enjoys public confidence and engineer endorsement [345-351]. Srinivas Tallapragada proposed measurable uplift in per-capita income for the bottom 50% of earners [355-357].
In closing, the Minister reaffirmed that AI agents can act as “force multipliers” for governance, provided that they are embedded within a trustworthy operating system, supported by sovereign data platforms and guided by robust guardrails [91-93]. Overall, the session highlighted a shared conviction that agentic AI, when built on sovereign data, transparent guardrails, and agile governance, can become a force multiplier for inclusive, resilient public services.
We are going to start with a very special guest. Minister Babu is going to join us for a keynote. Very excited to hear what you have to say, coming from Hyderabad, one of the centers of technology in India and in the world. So, Minister, thank you so much for joining us. And if I could ask you to come to the podium. Thank you so much, Minister.
Very good afternoon to all. In fact, we welcome you to our city of Delhi, a beautiful city, the capital of India. And many people are from India, too. And we welcome the distinguished panelists, the eminent panelists who are sitting here to discuss AI agents for a Better Tomorrow. And I welcome the leaders of the industry and the delegates over here. And especially coming to the subject, AI agents for a Better Tomorrow: you know, I wish to see where we stand today and where we would end up tomorrow. That is the point of discussion over here. We stand today at a fundamental inflection point in the history of governance. As a policymaker, I would like to mention a few points.
Because all the technocrats are sitting here, and all the eminent, you know, scientists, maybe from physics or maths, may be sitting on the other side, to develop AI to the next level. You know, for decades, the digital revolution in government was defined by the transition from paper to portals and from physical queues to digital clicks. But today, we are witnessing the birth of a new paradigm. We are moving beyond generative AI that simply answers. We are moving from that to agentic AI that acts now. That is what I have been discussing with Mr. Srinivas just now. And for 30 years, our relationship with technology was a series of commands. We used to give commands and get the answers.
We typed, we clicked, we prompted. We were the masters of the search bar. Nobody can deny that. But I stand here today, and I can see, and everybody can see, that the search bar is dying. In its place, something more profound. Just now Mrs. Sweeney was telling us about agency; it is just evolving. The first era of our nation building was defined by land. The second by industry. And the third is being defined, more elusively, by the intelligence of the system. And the nations that lead this century are those that learn to treat intelligence not as a product but as a form of public infrastructure.
This idea is not philosophical for our state of Telangana. It is the story of our everyday governance, because we are an IT-driven state, as we are known for. And I often say that artificial intelligence has three lives in the country. The first life is in the research labs. The second we take into the policy papers. But the third, ultimately, is how both of these combine to affect the life that truly matters for each and everybody. You know, how do we see it? It is when AI meets the real challenges of our lives: when artificial intelligence meets the dust we face, when AI meets the drought, when it meets the monsoons, when it meets the markets of the living society. And this is where its legitimacy is earned, when it really counters this dust, drought, monsoons and markets. In Telangana, we see agents not as tools; we would like to take them as teammates.
You know, the way the pilots rely on the co-pilots, tomorrow our government here in Telangana also sees that we rely on AI as co-governors: systems that can predict a flood before the first cloud gathers over the Musi. The Musi is the river in the midst of our city. You know, allocate resources before the crisis and deliver services before the citizen ever needs to ask. For example, take agriculture and a small farmer. I hail from a very remote area, and that too a rural place. For a farmer in my place, or in some other rural area, the climate is not an environmental concept; it is right now a daily negotiation with uncertainty.
So when we built our AI advisors, we did something unconventional. Right now we are doing this at the pilot stage. We asked farmers to train the system with us. You know, the dialects, the soil wisdom, the lived patterns become the patterns of the model. This is where the governance comes into the picture. Use the best of the technologies, whatever you invent or produce sitting in R&D, use the best of your grey matter to come up with some products; but until and unless we use them and induce them into our governance, there will be no end result. That is what we believe in. That is why our Telugu-first AI can record land records, interpret satellite indicators and compress the time between the climate event and an incident settlement.
So this saved lots of time, you know, for our government agencies as well as for the end user, the farmer. Our satellite-driven heat analysis no longer stops at mapping temperatures. It now shapes zoning, green belts and urban cooling strategies for Hyderabad, which we are planning to take up to the core by 2035. And across 33 districts in our state, our solar-powered edge compute nodes ensure that government services and climate services remain operational when the grid fails. And this is also one of the novel things: Telangana is the first state where we have implemented it. Yet I don't claim that these are the only examples for climate. This is just part of the story.
This is just a beginning. This is the first preface, we can say, because the real breakthrough is not from each project; it is from the architecture that binds them together. Our future projects, like the state-of-the-art infrastructure in the upcoming AI City, an absolutely dedicated AI city, and the Bharat Future City, which shall be a net-zero city, are designed not as smart districts, either for technology or for other aspects, but as self-learning cities: territories that define sustainability, territories which can provide their own compute and make themselves policy advisors. And our country's first sovereign AI nerve centre, ICOM: you know, this is the first-ever initiative by any state in India. We have come up with the first sovereign AI nerve centre, which is supposed to be the AI innovation hub, named ICOM. The aim and objective is, you know, that this intelligence shall go deep beyond just incubation, but also render into R&D, and shall have the prime focus of creating AI-ready talent for tomorrow's world. And I would like to mention here that Hyderabad and Telangana is the first state to come up with a platform, the Telangana Data Exchange platform; this sovereign open data pipeline ensures that the intelligence is grounded in integrity.
So the platform is in the open. And we are the first state to have put all the data on this platform. You know, through this open data pipeline, 1,084 datasets have moved from administrative exhaust to ecological signal. We have created something rare in the Global South: a state that generates its own intelligence at scale. And we have seen the results too. And the results have shown: healthcare doesn't wait for symptoms. It now anticipates risk. Because of the data exchange we have done with our co-partners, even in healthcare, with the doctors or with the public health institutions, they are not just waiting to deliver the medication, but predicting the risk and trying to put it into action.
And we are not waiting for the heat waves to come. We are trying to analyze through the data how we should place ourselves, and we are preparing corridors for the shade. And for farmers also, we believe, using this AI technology, we don't want farmers to wait for the loss. You know, they have to receive assurance before despair. And we are also planning that infrastructure doesn't wait to break. You know, it has to whisper when it will fail. You know, when all these cutting-edge technologies, especially AI, are deployed with purpose, AI agents offer government something rare in public life: the ability to act before harm, to prepare before shock, to protect before loss. And this is how resilient infrastructure emerges, how safe, climate-resilient cities take shape, and how our public services become anticipatory, humane and trusted.
And this is the future we are imagining, and we are trying to put all our actions into stream. It is this operating system we dreamt of, and we have started running it. And I believe the next chapter of statecraft will not be written in the boardrooms of traditional power centers but in the living laboratories of the Global South, in cities like Hyderabad, where the world can already see a preview of what an intelligent century of governance looks like. Let us leave Bharat Mandapam today, here while this great convention is taking place, with a shared conviction that the tomorrow we are building is not just smarter, it is braver. And, you know, as the great caption goes, AI for everyone, AI for human welfare should be the theme.
And also, we should, I as a policymaker and you as the technology experts sitting over there, aim and anticipate for it. I thank the organizers for giving me, you know, the length of time to air my pitch on behalf of our state of Telangana. I would like to thank the Salesforce team, especially the team management who invited me over here for gracing this, and for having, you know, all the best brains sitting over here, the grey matter who will be doing much more for the welfare of our human beings. Thank you very much.
Minister, thank you so much for joining us. We very much appreciate it. It was very exciting to hear what's happening in Hyderabad and in Telangana. Let's kick our panel off. Alright, so I am going to start with an icebreaker. Everyone gets 30 seconds to respond. This panel is about AI agents, so, and I'm going to start there and then come towards me: what would you say is the single biggest difference between the AI of last year, when we were sitting here, and the AI agents that we are seeing today? Saibal, can you kick us off?
So I think in my mind the conversation has moved decisively towards agentic AI. We are no longer talking, as the Honorable Minister also said, about solving discrete problems or discrete searches. We are now looking at end-to-end AI-led execution of business processes or government processes. I think that's the single biggest change in thinking that has come up.
Professor Lee Tiedrich?
To put this in context, I was involved in the International AI Safety Report, and we just had our panel on that a little while ago. And Professor Bengio was saying the biggest change from '25 to '26 is the emergence of agentic AI. And from my perspective, its ability not only to do the end-to-end, but also to act on behalf of people, is really the big change.
Mike?
So I'm probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve specific problems. What we see now are agents that are able to abstract the problem, do chain-of-thought reasoning, take that and turn it into sequenced action, and turn that into multi-agent, systems-level thinking. So the move from task-specific to systems-level is the big shift that I'm seeing.
And Srini?
Yeah, so I think for me the big shift has been from co-pilot, human in the loop, to agents which can act and really provide value, business value. And that's been the big shift.
So let's talk about that value. Let's talk about AI agents as a force multiplier. I'm going to start here this time. Srini, you lead engineering for one of the biggest platforms in the world. There's a lot of discussion about AI agents. Can you demystify this? What does that mean?
Yeah. So what does that mean? An agent, just like a human, first of all, has to act. It has agency and it acts. That's the first big difference. And like any agent, it has to have a couple of things. It has to know a role: just like a human, it needs to know what it's supposed to do, what are the jobs to be done. It needs knowledge: just like a human has knowledge in mind, an agent has to have knowledge. Some memory, both short-term and long-term memory. And then it should also be able to act. You know, in a digital world, it should be able to act on an API or something.
And then it should be able to act wherever the surface is, wherever the user is interacting with it: in a WhatsApp channel, a web channel, a digital channel, or an SMS text. More importantly, most important in all of this, is that we should have guardrails on what it's not supposed to do. That's the most important. And then all of it has to be covered, to make it useful, with what we call a trust layer, because these things can hallucinate, they can have bias, they can have toxicity; avoid all of that. And they are unpredictable ultimately, so it should have governance, and then auditability, so you can audit all of this. Doing all of this is what an agent does. And this is also why, even though there is a lot of hype, in reality it hasn't diffused enough; this is the business-value gap which we, as the vendors, are trying to bridge.
Thank you. Saibal I’m going to go to you next, so let’s talk about governance, we sit here in Delhi, the capital of one of the greatest nations of the world, the public sector, are they ready for this, how do we think about that?
So let me not answer that question directly; I think the public sector needs to be ready. All the way from managing public finances and public procurement to managing their workflows and processes better, there is no way that the public sector can avoid this. However, as Srini pointed out, the stakes here are very, very high. So imagine an agent crafting an RFP, a multi-million or a billion-dollar RFP, on behalf of the government. And in public procurement, we often sacrifice speed for procedural tightness. So what guardrails do we actually put around an agent, or more? Can it really be end-to-end? Can it really be fully autonomous?
Or do I still need that last human layer to make sure that the T's are crossed and the I's are dotted? Because the stakes are really high, and a mistake can really, you know, lead to a lot of negative impact. So I think the public sector has to be ready, but some of these guardrails have to be thought through. And in the context of the public sector, are agents fully autonomous, or do they still operate with a little bit of that human layer? I think that has to be thought through.
That's great, thank you. I love that you said RFPs, because that's a concrete example. So let's talk a little bit about use cases. Mike, I'm going to go to you; let's talk about resilient infrastructure. One of the examples I hear a lot for AI agents is that they can help you make reservations, and I love to eat, so I think making restaurant reservations is actually pretty valuable to me. But could an AI agent do something like design a bridge? Could it design an energy grid? Where do we stand between reality and science fiction?
Yes, I think we're tracking pretty quickly to agents being able to do just those kinds of things. In the past, using computational methods and AI for these things, which have been around for a reasonable time, has been very difficult. Because if you're using some form of computational method or AI to design a bridge, you have to specify that bridge perfectly. You have to give it perfect inputs. Now, it turns out that when a designer is designing something, they don't have perfect inputs. The process of design is actually figuring out what your inputs are, right? So this has always been a bit of a barrier for people to use these advanced methods.
With AI, and specifically AI agents, you’ve now got a much easier way of interacting. It’s more forgiving towards fuzzy requirements and earlier stages of thinking. It’s able to give you things that inspire you. So one of the things I talk a lot about publicly is that the notion of agents and creatives working in a loop together, that it’s breaking the cycle where the engineer has to come up with every idea from scratch, from a beginning. Rather describe what you’re doing, let the agents explore. So I’ll give you one example specifically in infrastructure because you wanted to get concrete. I mean, something that we work with is water systems, for example. So we’ve built AI agents that can analyze floodplains.
They can analyze how you might want to think of water drainage and these kinds of things. So every time you’re making a decision early on in your design, you can let this thing run through, and it’s going to optimize your design to ensure that drainage is going to be successful. Now, drainage seems like a small side thing, but it’s a pretty massive part of infrastructure. And having an agent handle that for you is a pretty big deal.
Mike, I have very close family ties to Louisiana, so drainage and flood zones, that is not a small thing. That is a very, very big thing. And actually, that’s a perfect segue to the question I wanted to ask Srini. So one of the most complex things that a government might have to deal with is disaster response. Is that a place where AI agents could be helpful?
I really like the theme, welfare for all. And while we can think of very big things AI might do, AI can add value right now, and disaster response is one good example. Another example I wanted to give is that the key is to give back time to the people. That’s very valuable; giving back time is a very noble goal, in my opinion, for everybody. So we have this very interesting use case: there is a city in New Thames in the UK that created an agent called Bobby. Bobby is a UK term for a policeman, and the citizens ask it a lot of questions that are not emergencies, and Bobby answers them.
More than 90% of them get a lot of value. What was interesting for me was that we have another city, in Tasmania, which is using a product, Agentforce, to roll out agents to more than a thousand police officers. A lot of the time when they are in the field, police officers, new or more experienced, have a lot of questions, and they ask this agent, which they call Terry, and many of them say Terry is their best partner. So while we can think about futuristic possibilities, here and now there are a lot of things we can provide with the technology, with guardrails, in the public sector and obviously in the private sector. If you have the right platform, with trust and governance as foundational values and all the right guardrails, you can still add a lot of value, and we are seeing thousands of examples across the public and private sectors. It’s a crawl-walk-run mode: you start with something basic and still add value. The most esoteric cases with multi-agent orchestrations come later, but you can start with the basics today and still get a lot of value. That’s what we are seeing.
That’s great. So, Professor Tiedrich, we’ve talked a little bit about how agents can help governments serve their publics. Are there risks there? Are there risks of over-reliance?
Yeah, I mean, there are definitely risks, and I share the view of my co-panelists that there are a lot of benefits to using AI in government and improving government services worldwide. But like everything else, we have to do it cautiously and smartly, and some of it comes back to the human factor: pick your use cases wisely. One of the themes in the safety report is that AI is emerging very jaggedly. We have some use cases, like computer programming, where it is really good. There are others that may not be quite ready for prime time. So when we think about over-reliance, it’s about identifying where AI is excelling, focusing on those use cases, and maybe doing sandboxes around some of the others to give them a little more time to mature.
I think over-reliance also comes back, picking up on some of the great points, to the guardrails. One of the things in the safety report is good news: we’ve made a lot of progress on guardrails and risk management, but as the technology moves quickly, there is a lot more work to be done. So we shouldn’t rely on AI so much that we overlook guardrails and where humans should be in the loop. And the third thing I’ll mention is the interoperability of different agents. As agents start to call upon third-party agents, we need to think through what guardrails apply, how you choose them, how you allocate liability, and how you test the agents you’re going to bring into your system.
So guardrails have come up. Srini mentioned it; you just mentioned it. Let’s talk about guardrails a little bit. Srini, we hear about chatbots, we hear about hallucinations. Those can be annoying, but when you’re talking about a government deploying AI agents, the consequences can be extremely significant; a hallucinating agent can be quite dangerous. So how do you engineer trust into a system so that a minister or a secretary can feel confident that it’s a tool they can use to serve their people?
Agents can drift, and they can hallucinate. So you need a command center where you can see all of it. This is the difference between a pilot or a demo, of which you can find thousands on YouTube, and real life, where these things matter. We had to build all of these things so that customers or governments can build confidence: they can audit, they can test, and not only they themselves, an independent party can test as well. All of this infrastructure is what is required to make this a reality. But once you do that, there’s huge value you can immediately provide to the customers or the citizens.
Can I just add to that quickly? Because I think you hit a really interesting point at the end there. When people talk about guardrails, they think of guardrails as this perfect thing: that at some point the guardrails are going to get strong enough that every result is perfect, it’s completely predictable, and we’re good. And I think we need to be honest about that. We’re talking about systems that are inherently probabilistic. You’re never going to make a probabilistic system 100% deterministic; it’s an oxymoron. Right. So what we’ve discovered is that you still do all the guardrails work we’re all talking about, but, as you were getting at, you also build systems that can look at the accuracy of what’s produced and give you some feedback on how accurate the solution is or how well it’s going to perform. And then, and this is very important, we’ve discovered the value of giving control to the human being, in our case to an engineer, who is able to say, oh, I get it.
The result is a little off. I’m going to give it some more feedback. I’m going to reassess the results. I’m going to run it again. Or I might even go in myself and tweak that information. And what we’ve discovered, when I’m talking to an engineer and explaining how this stuff works, is that if I don’t give them that level of control, they don’t trust the system. The minute they know they can actually control it, they do. So trust doesn’t depend on a perfect answer. Trust actually depends on transparency and understanding, and then the ability to come in and control something.
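The review loop described here, where a probabilistic system reports how confident it is and the engineer decides whether to accept, give feedback, or re-run, can be sketched in a few lines. Everything below is hypothetical (the `run_model` stand-in, its confidence score, the threshold); it only illustrates the pattern of keeping a human decision point around a probabilistic system, not any vendor’s actual implementation.

```python
import random

def run_model(design_params):
    """Stand-in for a probabilistic design agent: returns a result plus a
    self-reported confidence score (both simulated here)."""
    random.seed(design_params.get("seed", 0))
    return {"result": "drainage layout v1", "confidence": random.uniform(0.5, 1.0)}

def human_in_the_loop(design_params, threshold=0.8, max_rounds=3):
    """Keep the engineer in control: surface the confidence score and let a
    reviewer accept, tweak inputs, or re-run, rather than trusting blindly.
    Assumes max_rounds >= 1."""
    for round_no in range(1, max_rounds + 1):
        output = run_model(design_params)
        if output["confidence"] >= threshold:
            # High confidence: still route to the engineer for final sign-off.
            return {"status": "ready_for_review", "round": round_no, **output}
        # Low confidence: adjust inputs (here, just a new seed) and run again.
        design_params = {**design_params, "seed": design_params.get("seed", 0) + 1}
    # Confidence never cleared the bar: hand the whole task to a human.
    return {"status": "needs_manual_work", "round": max_rounds, **output}
```

The key design choice is that neither branch is fully autonomous: a high-confidence result is queued for sign-off rather than executed, which matches the point that trust comes from control, not from a perfect answer.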
But I think that’s also because the engineers understand this. It’s a tool for them to use, to help them. It’s not something that is going to take control. Is there anything specifically with respect to infrastructure that you think governments should be mindful of?
Yeah. Well, look, infrastructure is not known as the easiest and quickest thing to build in countries, right? And one of the really boring but absolutely necessary things with infrastructure is to make sure the digital ecosystem around that infrastructure is set. I see a lot of places in the world getting into building infrastructure, trying to do it quickly, without getting all that digital infrastructure in place. So building information modelling, ensuring that every part of your infrastructure is correctly modelled and represented at the right level. AI is not going to just magically come in and solve a bunch of problems unless you’ve got a lot of that digital stuff in place already.
So it’s a little bit of the boring work, but getting that stuff in place early is one of the biggest things. I’ve had a number of conversations here this week about the 2047 initiative in India and the amount of infrastructure that needs to be built in this country, and the importance of using something like building information modelling, getting standard data, getting that in place now. If you get that in place now, all this AI goodness is way easier to deploy against it.
Yeah, please.
Yeah, so I heard a lot of discussion around sovereignty, and I think we should think of sovereignty at two levels: strategic sovereignty and technical sovereignty. By strategic sovereignty, I mean you have control over your data, your governance policies, and your operational policies. That, I think, you can implement right now and get value from. Then there is the technical level, where people want to control their entire supply chain, from the chips on up. I would like governments, public officials, and policy officials to think of this as two tracks. The second track takes longer and requires a lot of capital investment. Don’t let the second track stop you from getting the benefit of the first track. The first track is easy: you can ensure the data doesn’t leave your country, your policy guardrails are in control, and you have a human in the loop. You still get a lot of benefits while you continue on the second track. That would be my request to all the governments.
Can I just make a quick build on what Mike said? Because I do a lot of my work in the public sector with governments. I think one of the biggest guardrails, beyond policies, is actually the skilling, the upskilling. Like Mike said, it’s an inherently probabilistic system, right? So you cannot expect it to give correct results all the time; there’s no such thing as a guaranteed correct result. And the person who’s actually using this tool at the district level, at the state level, to make real government decisions is not an AI engineer. That person needs to be upskilled and needs to be told what can be trusted and what requires that additional layer of checking.
So if agentic AI is to take off in the public sector at scale, then upskilling at various levels of government on what can and cannot be trusted is also a very, very big component.
Yes, I totally agree. Professor, I wanted to ask you: it feels so trite to say technology is moving really quickly, but in the last few years AI has been moving very, very quickly. We’ve talked a lot about guardrails. How should governments think about this? How are governments going to keep up in terms of setting expectations, and potentially regulation, for a technology that is moving so quickly?
It’s a hard one. I think AI has evolved into a global multi-disciplinary field, and we need to bring the global community together. We need policymakers and lawyers talking with engineers and sector specialists to really inform policy in real time. I’m a big fan; I spent a year working at NIST, the U.S. National Institute of Standards and Technology. We need to figure out how to do some of these guardrails starting with the science. Then the science can inform how to develop the standards and how to develop the evals. And then it becomes a question for each jurisdiction.
I mean, different countries have different views on whether we should regulate or not. The U.S. has a very deregulatory approach; Europe is the opposite. But if we can agree on what the common standards are for evaluation and testing, then governments are free to decide whether to mandate them or not. And one important nuance to add to the mix, and this has been a theme of the conference, is this: we want some standardization of these evaluation mechanisms.
We have to recognize that we speak different languages and have different cultural norms. So when we want standardization, we’ve got to be able to localize what the evaluation looks like: what might be appropriate in one country isn’t going to be appropriate in another. It’s hard, but start with the science. I would point people to the scientific report, and to building on that, working through the AISI network, standards organizations, and all these other initiatives, to develop the evaluation ecosystem. Then regulation can overlay on top of that, as policymakers think appropriate for their jurisdictions.
But if I could ask a follow-up question to you, or any of the panelists: I think one of the challenges there for companies, and I also speak for the enterprise software companies I represent, is that it’s really helpful to know what those government expectations are. Industry is looking for clarity and predictability.
Should I take a shot at it? As a software provider at Autodesk, we definitely deal with that, Victoria. We’ve had a couple of approaches. One, we’re obviously going to stay on top of this all the time, working with governments and making it part of the conversation. I spend a good part of my year traveling around the world, talking to governments, trying to help them understand what needs to happen, but also helping us understand, as you said, what they’re wanting. But the main problem is just the sheer variance. Even within the United States, we have differences between state efforts, right? And then around the world, it gets even more complicated.
What we’ve tried to do is run as far ahead of this as we can. So if there is a way we can build in good controls right from the beginning, we build those controls to the maximum extent that we can, within reason. I’ll give you an example. Every AI feature we have in our software carries something called a transparency card, which looks like a nutrition label on food. That label tells you what kind of model is behind the feature, what data was used to train it, what level of control you have, what accuracy it has, any bias that we know about in the model, that kind of stuff.
And it’s a standard thing. We rolled that out about a year ago, really to try and stay ahead of things, so if governments started asking for these things, well, we’ve got a transparency card. What’s actually happened now is that there’s a bunch of interest in that becoming part of a standard. I’m not saying that just to tout us, because I think other companies are doing great things in this space as well; you guys are doing a bunch of good stuff in this space too. I think this is an opportunity for us in industry to run ahead and try to help define some of these things, because it is moving so fast.
And I hate to, maybe I shouldn’t say this publicly, but the government doesn’t always have the best answers, right? So, I mean, we can work with government to help them develop those answers and come up with good things, which helps us then, you know, resist some of the complexity that’s coming down the line.
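The “nutrition label” idea lends itself to a simple structured record. The sketch below is hypothetical: the field names are illustrative guesses, not Autodesk’s actual transparency-card schema, and the example values are invented. They simply mirror the categories named above (model type, training data, user control, accuracy, known bias).

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyCard:
    """Illustrative AI-feature 'nutrition label'; all field names are hypothetical."""
    feature_name: str
    model_type: str                  # what kind of model is behind the feature
    training_data: str               # what data was used to train it
    user_controls: list = field(default_factory=list)   # levers the user has
    accuracy_note: str = ""          # plain-language statement of accuracy
    known_biases: list = field(default_factory=list)    # biases we know about

    def to_json(self) -> str:
        # Serialize so the card can be published alongside the feature.
        return json.dumps(asdict(self), indent=2)

# Hypothetical card for the floodplain-analysis example discussed earlier.
card = TransparencyCard(
    feature_name="Floodplain analysis assistant",
    model_type="probabilistic surrogate model (illustrative)",
    training_data="public terrain and drainage datasets (illustrative)",
    user_controls=["adjust inputs", "re-run analysis", "override results"],
    accuracy_note="estimates carry uncertainty; engineer sign-off required",
    known_biases=["sparser coverage outside urban areas (illustrative)"],
)
print(card.to_json())
```

Publishing such a record per feature is one plausible way the disclosure could feed into a standard: the schema, once agreed, is what regulators and auditors would test against.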
Yeah, so one of the challenges in this is that you can’t project too far ahead. It’s an exponential curve; it’s very hard to project. So sometimes you learn by doing. I think the biggest thing all governments can do is build a policy framework for how to update these standards. Today it usually takes a long time, so everybody’s afraid, and then it’s even harder to change a standard. So they try to solve everything up front, while things keep changing. The main thing policymakers could do is build in a feedback loop, a way to improve the policy framework, because then you don’t need to be afraid of getting everything right.
You understand that, hey, you set out some basics, and as new data comes in, you can update them. In engineering and product, we call this the product feedback loop and agile development. If we have something equivalent for policy, then everybody is clear, because we all want the right thing. There’s no disconnect on the foundational goal: we want AI to help our entire community in a net-positive way. And if the regulatory framework is able to change along with a changing technology, then we are not afraid of having to get everything right on day one.
And we can learn by doing it. So agile regulation.
I have loved this panel. Unfortunately, we’re coming to a close, so I’m going to ask each of you one final question. Saibala, I’m going to start with you and then head this way. If we were so fortunate to meet again in Delhi in three years, looking back, what would you say is the one thing that would best determine whether or not we have succeeded in addressing some of these challenges? I know it’s a big question, sorry, but thank you.
Since we’re in Delhi, I’ll give the answer in the Indian context. One of the primary themes of this particular conference is inclusivity, so for me the true success of AI will be if a farmer could talk to a small-language-model-powered tool in his or her own vernacular language and get practical advice on how to manage the crop and how to manage the cattle. If that could be scaled up across the length and breadth of India, then that, for me, is the real win for AI.
That’s a big win. I mean, that’s a significant impact. Thank you. Great. Professor Tiedrich?
Yeah, so
I’m coming back to the evaluation ecosystem. We’ve made a lot of progress over the last couple of years, but more work needs to be done. More countries, including in the Global South, are launching AISIs, AI safety or security institutes, which is not hard, binding regulation, but it is governments weighing in. Real progress, three years from now, would be an active AISI network that’s sharing information and making real progress on evaluation techniques. And one of the commitments that came out from some of the companies yesterday is also localizing that, so everybody can benefit, Global North and Global South. Thank you.
Mike?
So earlier on, I spoke about physical infrastructure in countries. What I would hope to see in a couple of years’ time is infrastructure genuinely being developed faster than it has ever been developed, which is a really, really tough problem to make happen in the physical world. As a measure of AI truly delivering, that’s an incredible measure. But on top of that, it needs to do so without compromising safety, and without being a big black box that nobody understands, right? So what I would love to see is not only that infrastructure being developed faster, but the public engaged with it.
The engineers and people doing it feel comfortable with it. They feel secure. They feel fine signing off on it because they feel it is reliable. Thank you.
Srini?
If AI is as revolutionary as we all assume, I would hope that in three years the per capita income of the bottom 50% income percentile has measurably increased. That, for me, is the real impact of this technology.
That’s fantastic. I want to say thank you to all of our panelists, and a special thank you to Srini and to Salesforce for bringing us all together here today. Thank you to our audience for joining us. A big round of applause for our panelists. Thank you.
“AI should be treated as public infrastructure rather than a product”
The knowledge base contains statements that “Intelligence is not an asset, it’s infrastructure” and that “Information should be treated as a public good rather than a commercial commodity,” providing supporting context for treating AI as public infrastructure [S82] and [S80].
“AI advisors are being trained together with farmers, incorporating local dialects, soil wisdom and lived patterns into the model”
A related point in the knowledge base notes that farmers are using AI weather forecasts, illustrating how AI is being applied in agriculture to support farmers, which adds nuance to the claim about farmer-centric AI advisors [S85].
The panel displayed strong convergence on several fronts: the transition to agentic AI, the necessity of guardrails and human oversight, the importance of capacity building, the framing of AI as sovereign public infrastructure, and the need for agile, standards‑based regulation. These shared positions cut across government, academia and industry, indicating a common understanding of both opportunities and risks associated with AI agents.
High consensus – the speakers largely agree on the direction of AI development and the policy/operational safeguards required, which bodes well for coordinated action on AI governance, capacity building and infrastructure investment.
The discussion revealed a core consensus that AI agents must be governed by robust guardrails, auditability, and human oversight. The main points of contention centered on how much autonomy to grant AI systems—especially for high‑stakes public functions like disaster prediction—and on the best pathway to achieve safe deployment, whether through government‑driven standards, agile policy cycles, or industry‑led transparency measures. The unexpected optimism expressed by the Minister about AI’s autonomous capabilities contrasted sharply with the panel’s cautionary stance, underscoring a tension between policy ambition and technical realism.
Moderate to high. While participants share the overarching goal of trustworthy AI, they diverge significantly on the degree of autonomy and the primary mechanism for regulation, which could affect the speed and effectiveness of AI integration into public services.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.