Agents of Change: AI for Government Services & Climate Resilience

20 Feb 2026 11:00h - 12:00h


Session at a glance

Summary

This discussion focused on the transformative potential of AI agents in governance and public service, featuring Minister Sridhar Babu from Telangana, India, and a panel of technology experts. Minister Babu opened by describing how AI represents a fundamental shift from traditional command-based technology to agentic AI that can act autonomously, positioning artificial intelligence as critical public infrastructure for modern governance. He highlighted Telangana’s pioneering initiatives, including AI-powered agricultural advisors that work with farmers in local dialects, satellite-driven urban planning for climate resilience, and the creation of India’s first sovereign AI nerve center called ICOM.


The panel discussion revealed consensus that the biggest change from the previous year was the evolution from narrow, task-specific AI tools to comprehensive agentic systems capable of end-to-end process execution. Panelists emphasized that effective AI agents require clearly defined roles, knowledge bases, memory systems, and most importantly, robust guardrails and governance frameworks. They discussed practical applications ranging from police assistance chatbots to infrastructure design and disaster response systems.


A central theme was the balance between AI autonomy and human oversight, particularly in high-stakes government applications where errors could have significant consequences. The experts stressed the importance of transparency, auditability, and maintaining human control over AI systems. They advocated for “agile regulation” that can adapt quickly to technological changes while ensuring safety and accountability. The discussion concluded with aspirational goals including measurable improvements in income for lower-income populations and faster, safer infrastructure development, demonstrating the potential for AI agents to create meaningful societal impact when properly implemented with appropriate safeguards.


Key points

Major Discussion Points:

Evolution from Generative AI to Agentic AI: The panelists emphasized the fundamental shift from AI that simply answers questions to AI agents that can act autonomously, execute end-to-end business processes, and operate with agency rather than just responding to commands.


Government Implementation and Readiness: Extensive discussion on how public sector organizations can deploy AI agents for governance, with examples from Telangana state including agricultural advisors, disaster response systems, and the creation of India’s first sovereign AI nerve center (ICOM).


Trust, Guardrails, and Risk Management: Critical focus on building reliable AI systems through proper guardrails, addressing hallucinations, ensuring auditability, and maintaining human oversight, especially given the high stakes of government decision-making.


Practical Applications and Use Cases: Concrete examples ranging from infrastructure design (bridges, water systems, flood management) to citizen services (police assistance agents like “Bobby” and “Terry”), demonstrating real-world value creation.


Regulatory Frameworks and Standards: Discussion of the challenges governments face in regulating rapidly evolving AI technology, with emphasis on developing agile policy frameworks, international standards, and evaluation mechanisms that can adapt to technological changes.


Overall Purpose:

The discussion aimed to explore how AI agents can serve as force multipliers for better governance and public service delivery, particularly focusing on practical implementation challenges, trust-building mechanisms, and the potential for AI to create inclusive benefits across society, especially in developing nations like India.


Overall Tone:

The discussion maintained an optimistic yet pragmatic tone throughout. It began with inspirational examples from Minister Sridhar Babu about Telangana’s AI initiatives, then evolved into a more technical and cautious conversation about implementation challenges. The panelists balanced enthusiasm for AI’s potential with realistic acknowledgment of risks and limitations. The tone remained collaborative and solution-oriented, with all participants emphasizing the importance of responsible AI deployment for societal benefit, concluding on an aspirational note about measurable impact on global income inequality.


Speakers

Speakers from the provided list:


Minister Sridhar Babu – Minister from Telangana state, India; policymaker focused on AI governance and digital transformation in government services


Victoria Espinel – Panel moderator and discussion facilitator


Srinivas Tallapragada – Engineering leader for a major platform; expert in AI agents and enterprise AI implementation


Mike Haley – Representative from Autodesk; expert in AI applications for infrastructure, engineering, and design


Lee Tiedrich – Professor involved in International AI Safety Report; expert in AI safety, evaluation, and policy frameworks


Saibal Chakraborty – Expert working with governments and public sector on AI implementation and governance


Additional speakers:


Minister Bawu – Mentioned at the beginning but appears to be the same person as Minister Sridhar Babu (likely a name confusion in the transcript)


Full session report

This comprehensive discussion on AI agents for governance and public service delivery featured Minister Sridhar Babu from Telangana, India, delivering a keynote address followed by a panel discussion moderated by Victoria Espinel. The conversation explored the transformative potential of artificial intelligence in government operations under the theme “AI for Better Tomorrow,” revealing a fundamental shift from reactive AI tools to proactive agents capable of autonomous action.


Minister Sridhar Babu’s Vision: AI as Public Infrastructure

Minister Sridhar Babu opened with a profound observation that artificial intelligence represents a fundamental inflection point in governance history. He articulated a vision where AI transitions from being merely a product to becoming essential public infrastructure, comparable to roads or electricity networks. The Minister emphasised that “we are moving beyond generative AI that simply answers” to “agentic AI that acts,” declaring that “I can see and everybody can see the search bar is dying” and being replaced by something more sophisticated and proactive.


The Minister provided compelling examples of how Telangana state has positioned itself at the forefront of AI implementation in governance. The state has developed AI advisors that work directly with farmers, incorporating local dialects, soil wisdom, and lived patterns into the models. This Telugu-first AI system can “record land records, interpret satellite indicators, and compress the time between the climate event and an incident settlement.”


Telangana’s satellite-driven heat analysis extends beyond temperature mapping to shape urban zoning, green belt planning, and cooling strategies for Hyderabad, with ambitious plans for implementation by 2035. Across the state’s 33 districts, solar-powered edge computing nodes ensure government services remain operational even when the electrical grid fails, representing a novel approach to resilient digital infrastructure.


Perhaps most significantly, Telangana has established India’s first sovereign AI nerve centre, ICOM (an AI innovation hub), designed to go beyond incubation to encompass research and development whilst creating AI-ready talent. The state has also launched the Telangana Data Exchange Platform, an open data pipeline that has transformed 1,084 datasets from administrative records into actionable intelligence signals, creating what the Minister described as “something rare in the global south” – a state that generates its own intelligence at scale.


The Panel Discussion: Defining the Paradigm Shift

Victoria Espinel, serving as moderator, opened the panel discussion by asking about the biggest difference between AI last year and AI agents today. This icebreaker question set the stage for unanimous agreement among the expert panel that the emergence of agentic AI represents the most significant development in the field over the past year.


Saibal Chakraborty noted that conversations have moved decisively towards end-to-end AI-led execution of business and government processes. Professor Lee Tiedrich, who spent a year working at the U.S. National Institute of Standards and Technology (NIST), highlighted AI’s newfound ability to act on behalf of people rather than merely assisting them. Mike Haley described the evolution from narrow, task-specific agents to systems capable of abstract reasoning and multi-agent orchestration. Srinivas Tallapragada, leading engineering for one of the biggest platforms in the world at Salesforce, emphasised the shift from co-pilot systems requiring human oversight to autonomous agents delivering tangible business value.


Technical Framework: Components and Capabilities of AI Agents

Srinivas Tallapragada provided a comprehensive framework for understanding AI agents, emphasising that like humans, agents must have clearly defined roles and understand their jobs. They require both short-term and long-term memory systems, knowledge bases, and the ability to act through digital interfaces such as APIs or communication channels like WhatsApp, web platforms, or SMS.


Crucially, agents must operate within well-defined guardrails that specify what they are not supposed to do, all underpinned by a trust layer that addresses hallucinations, bias, toxicity, and unpredictability through governance and auditability mechanisms. This technical foundation enables AI agents to move beyond simple query-response interactions to complex, multi-step processes that can anticipate needs and take proactive action.
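The component list above (role, knowledge, memory, the ability to act, guardrails, and an auditable trust layer) can be illustrated with a minimal sketch. Everything in it is hypothetical and invented for illustration: the class name, the guardrail check, and the audit log are not any vendor's actual API, only one way the pieces described in the discussion might fit together in code.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal, hypothetical illustration of the agent components described above."""
    role: str                                        # the job the agent is supposed to do
    knowledge: dict                                  # its knowledge base
    memory: list = field(default_factory=list)       # long-term memory of past actions
    forbidden: set = field(default_factory=set)      # guardrails: actions it must not take
    audit_log: list = field(default_factory=list)    # trust layer: auditability

    def act(self, action: str, payload: str) -> str:
        # Guardrails come first: refuse anything explicitly out of bounds.
        if action in self.forbidden:
            self.audit_log.append(("refused", action))
            return f"refused: '{action}' is outside this agent's guardrails"
        # Record the action for auditability, then "act". A real agent would
        # call an API or reply on a channel such as WhatsApp, web, or SMS.
        self.memory.append((action, payload))
        self.audit_log.append(("performed", action))
        return f"{self.role} performed '{action}' on '{payload}'"

# Usage: a police-assistance agent that answers queries but never issues fines.
bobby = Agent(role="police assistant",
              knowledge={"non_emergency": "dial the non-emergency line"},
              forbidden={"issue_fine"})
print(bobby.act("answer_query", "lost property"))
print(bobby.act("issue_fine", "parking"))
```

The point of the sketch is the ordering: the guardrail check runs before any action, and every decision, including refusals, lands in the audit log so the system remains reviewable.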


The panellists provided concrete examples of successful implementations. “Bobby,” a police assistance chatbot in New Thames, UK, handles 90% of non-emergency citizen queries, while “Terry” in Tasmania supports over 1,000 police officers in the field and has become what many officers describe as their “best partner.”


Trust, Transparency, and Risk Management

The discussion revealed sophisticated thinking about the challenges of deploying AI agents in high-stakes government environments. Mike Haley made a crucial point about the inherent nature of AI systems, noting that “you’re never going to make a probabilistic system 100% deterministic” because “it’s an oxymoron.” This insight reframed the entire approach to trust-building, shifting focus from achieving perfect accuracy to ensuring transparency, understanding, and human control.


Saibal Chakraborty highlighted a critical but often overlooked aspect of AI deployment: the need for comprehensive upskilling of government workers who will use these systems. He emphasised that the person making real government decisions “at the district level, at the state level” is “not an AI engineer” and therefore requires training to understand what can be trusted and what requires additional verification.


The panellists agreed that effective guardrails must encompass multiple layers, including technical safeguards, governance frameworks, auditability mechanisms, and command centres for monitoring system performance and preventing drift. Srinivas Tallapragada stressed the importance of distinguishing between pilot demonstrations and real-world deployment, noting that whilst thousands of demos exist on platforms like YouTube, actual implementation requires robust infrastructure for testing, auditing, and independent verification.


Mike Haley described how industry can contribute to this process by proactively developing transparency measures and controls, citing Autodesk’s development of “transparency cards” that function like nutrition labels for AI features, providing users with information about underlying models, training data, accuracy levels, and known biases.
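To make the nutrition-label analogy concrete, a transparency card of the kind described might look like the following sketch. The field names and every value are invented for illustration; this is not Autodesk's actual format, only a plausible shape covering the disclosures mentioned: underlying model, training data, accuracy, and known biases.

```python
# Hypothetical transparency card for an AI feature (all values illustrative).
transparency_card = {
    "feature": "floodplain analysis assistant",
    "underlying_model": "in-house vision model",
    "training_data": "licensed satellite imagery, 2015-2024",
    "reported_accuracy": 0.92,
    "known_biases": ["sparser coverage of rural areas"],
    "human_oversight_required": True,
}

# A card only works as a "label" if every disclosure field is present.
REQUIRED_FIELDS = {"underlying_model", "training_data",
                   "reported_accuracy", "known_biases"}

def is_complete(card: dict) -> bool:
    """Check that all required disclosure fields are present on the card."""
    return REQUIRED_FIELDS.issubset(card)

print(is_complete(transparency_card))  # True
```

Treating the card as structured data rather than free text is what makes it auditable: a procurement process could mechanically reject AI features whose cards fail the completeness check.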


Practical Applications: Infrastructure, Agriculture, and Disaster Response

The discussion moved beyond theoretical possibilities to examine concrete applications where AI agents can deliver immediate value. Mike Haley described how AI agents can revolutionise infrastructure design by handling fuzzy requirements and early-stage thinking that traditional computational methods struggle with. He provided specific examples of AI agents analysing floodplains and optimising water drainage systems, noting that whilst drainage might seem like a minor consideration, it represents a massive component of infrastructure development, particularly relevant to India’s 2047 infrastructure initiative.


In disaster response scenarios, AI agents can provide critical support by processing vast amounts of information quickly and coordinating resources effectively. The technology’s ability to operate in multiple languages and cultural contexts makes it particularly valuable for diverse populations during emergencies.


The agricultural sector emerged as a particularly promising area for AI agent deployment. The vision articulated by Saibal Chakraborty of farmers being able to communicate with AI systems in their vernacular languages through small language model powered tools to receive practical advice on crop and livestock management represents a powerful example of inclusive technology deployment that could have transformative economic impacts.


Sovereignty and Implementation Strategies

Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that provides a practical framework for government AI adoption. Strategic sovereignty involves maintaining control over data, governance policies, and operational procedures – capabilities that governments can implement immediately. Technical sovereignty, which encompasses control over the entire supply chain including chips and hardware, requires longer-term investment and capital commitment.


This two-track approach allows governments to begin realising AI benefits whilst building towards greater technological independence. The framework addresses common concerns about dependency on foreign technology providers whilst providing a pragmatic path forward that doesn’t require complete technological self-sufficiency before implementation can begin.


Regulatory Challenges and Adaptive Frameworks

The rapid pace of AI development presents unprecedented challenges for regulatory frameworks designed for slower-moving technologies. Professor Lee Tiedrich emphasised the need for global collaboration between policymakers, lawyers, engineers, and sector specialists to develop common evaluation standards whilst respecting cultural differences and local contexts. She referenced the International AI Safety Report and the importance of building robust evaluation ecosystems through international AI safety institutes that share information and techniques globally.


The concept of “agile regulation” emerged as a key theme, with Srinivas Tallapragada advocating for policy frameworks designed for iteration and updates rather than permanent solutions. This approach acknowledges that the exponential nature of AI development makes it impossible to anticipate all future scenarios, requiring regulatory systems that can learn and adapt through implementation experience.


Success Metrics and Future Vision

The discussion concluded with aspirational but concrete visions for measuring AI success. Srinivas Tallapragada proposed that if AI is truly revolutionary, it should produce measurable improvements in per capita income for the bottom 50% of income earners within three years. This metric cuts through technical complexity to focus on tangible social impact.


Saibal Chakraborty envisioned success as enabling farmers across India to access practical agricultural advice in their vernacular languages, representing true inclusivity in AI deployment. Mike Haley focused on infrastructure development, hoping to see AI enable faster construction without compromising safety, with public engagement and professional confidence in the systems.


Minister Babu’s vision encompassed anticipatory governance where AI systems can “predict a flood before the first cloud gathers,” “allocate resources before the crisis,” and “deliver services before citizens ever need to ask.” This represents a fundamental transformation from reactive to proactive government service delivery, aligned with his broader vision of “AI for everyone, AI for human welfare.”


Conclusion

This discussion revealed remarkable consensus on key principles whilst acknowledging significant implementation challenges. All participants agreed that AI agents represent a fundamental shift from tools to autonomous actors, that robust guardrails and transparency are essential, and that success should be measured by inclusive impact rather than technical sophistication.


The examples from Telangana demonstrate that developing nations need not wait for perfect solutions before beginning AI implementation. Instead, they can pursue strategic approaches that maintain sovereignty whilst leveraging global technological capabilities to address local challenges and serve their populations more effectively.


The conversation ultimately presents AI agents not as a distant future possibility but as a present reality requiring thoughtful implementation, robust safeguards, and clear focus on inclusive benefits. The path forward requires balancing innovation with responsibility, autonomy with oversight, and global collaboration with local sovereignty – challenges that will define the success of AI in governance for years to come.


Session transcript

Victoria Espinel

We are going to start with a very special guest. Minister Bawu is going to join us for a keynote. Very excited to hear what you have to say, coming from Hyderabad, one of the centers of technology in India and in the world. So, Minister, thank you so much for joining us. And if I could ask you to come to the podium. Thank you so much, Minister.

Minister Sridhar Babu

Very good afternoon to all. In fact, we welcome you to our city of Delhi, a beautiful city, the capital of India. And many people are from India, too. And we welcome the distinguished, eminent panelists who are sitting here to discuss AI for a Better Tomorrow. And I welcome the leaders of the industry and the delegates over here. And especially coming to the subject, AI agents for a Better Tomorrow. You know, I wish to see, you know, where we stand today and where we would end up tomorrow. That is the point of discussion over here. We stand today at a fundamental inflection point in the history of governance. As a policymaker, I would like to mention a few points.

Because all the technocrats and all the eminent, you know, scientists, maybe from physics or maths, may be sitting on the other side, to develop AI to the next level. You know, for decades, the digital revolution in government was defined by the transition from paper to portals and from physical queues to digital clicks. But today, we are witnessing the birth of a new paradigm. We are moving beyond generative AI that simply answers. We are moving from that to agentic AI that acts now. That is what I’ve been discussing with Mr. Srinivas just now. And for 30 years, our relationship with technology was a series of commands. We used to give commands and used to get the answers.

We typed, we clicked, we prompted. We were the masters of the search bar. We used to, you know, we were the masters. Nobody can say that. But I stand here. I stand here today. I can see and everybody can see the search bar is dying. In its place, something more profound. Just now Mrs. Sweeney was just telling about agency. It’s just evolving. The first era of our nation building was defined by land. The second by industry. And the third will be defined by something more elusive: the intelligence of the system. And the nations that lead this century are those that learn to treat intelligence not as a product but as a form of public infrastructure.

The idea is not philosophical for our state of Telangana. It is the story of our everyday governance, because it is an IT-driven state, as we are known for. And I often say that artificial intelligence has three lives in the country. The first life is in the research labs. The second we take into the policy papers. But the third, ultimately, is how both of these combine together, how we are trying to affect the life that truly matters for each and everybody. You know, how do we see it? It is when AI meets the real challenges of our lives, when artificial intelligence meets the dust we face, when AI meets the drought, when it meets the monsoons, when it meets the markets of the living society. This is where its legitimacy is earned, when it really counters this dust, doubt, monsoons and markets. In Telangana, we see agents not as a tool here; we would like to take them as teammates.

You know, the way the pilots rely on the co-pilots. Tomorrow our government here in Telangana also sees that we rely on AI as a co-governor, a system that can predict a flood before the first cloud gathers over the Moosey. Moosey is our river in the midst of our city. You know, allocate resources before the crisis and deliver services before citizens ever need to ask. For example, if you take agriculture: a small farmer. I hail from a very remote area, and that too a rural place. For a farmer in my place, or in some other place in a rural area, the climate is not an environmental concept for them; it is right now a daily negotiation with uncertainty.

So when we built our AI advisors, we did something unconventional. Right now we are trying to do it at the pilot stage. We asked farmers to train the system with us. You know, the dialects, the soil wisdom, the lived patterns become the pattern of the model. This is where the governance comes into the picture. To use the best of the technologies, whatever you invent or produce sitting in R&D, use the best of your grey matter to come up with some products; until and unless we use and induce it into our governance, there will be no end result. That is what we believe in. That is why our Telugu-first AI can record land records, interpret satellite indicators, and compress the time between the climate event and an incident settlement.

So this saved lots of time, you know, for our, you know, government agencies as well as for the end user, the farmer. Our satellite-driven heat analysis no longer stops at mapping temperatures. It now shapes zoning, green belts and urban cooling strategies for Hyderabad, which we are planning to take up to the core by 2035. And across 33 districts in our state, our solar-powered edge computing nodes ensure that government services remain operational when the grid fails. And this is also one of the novel things; Telangana is the first state where we have implemented it. Yet I don’t claim that these are examples for climate. This is just a fact of a story.

This is just a beginning. This is the first preface, we can say, because the real breakthrough is not from each project; it is from the architecture that binds them together. Our future projects, like the state-of-the-art infrastructure in the upcoming AI city, an absolutely dedicated AI city, and the Bharat Future City, which shall be a net-zero city, are designed not as smart districts, either for technology or for other aspects, but as self-learning cities: territories that define sustainability, territories which can provide themselves the compute and make them policy advisors. Our country’s first sovereign AI nerve center, ICOM: you know, this is the first-ever initiative by any state in India, that we have come up with the first sovereign AI nerve center, which is supposed to be the AI innovation hub, named ICOM. The aim and objective is, you know, that this intelligence shall go deep beyond just incubation, but also render into R&D, and shall be the prime focus of creating AI-ready talent for tomorrow’s world. And I would like to mention here that Hyderabad and Telangana is the first state to come up with a platform, the Telangana Data Exchange Platform; this sovereign open data pipeline ensures that the intelligence is grounded in integrity.

So the platform is in the open. And this is the first state; we have put all the data on this platform. You know, if we go through it, by this open data pipeline, you know, 1,084 data sets have moved from administrative exhaust to ecological signal. We have created something rare in the global south: a state that generates its own intelligence at scale. And we have seen the results too. And the results have shown. Healthcare doesn’t wait for symptoms. It now anticipates risk. Because of the data exchange we have done with our co-partners, even in healthcare, with the doctors or with the public health institutions, they are not just waiting to deliver the medication, but predicting the risk and trying to put it into action.

And we are not waiting for the heat waves to come. We are trying to analyze through the data how we should place ourselves, and we are preparing corridors for the shade. And farmers also, we believe, using this AI technology, we don’t want farmers to wait for the loss. You know, they have to receive assurance before despair. And we are also planning that infrastructure doesn’t wait to break. You know, it has to whisper when it will fail. You know, when all these cutting-edge technologies, especially AI, are deployed with purpose, AI agents offer government something rare in public life: the ability to act before harm, to prepare before shock, to protect before loss. And that is how resilient infrastructure emerges, how safe climate-resilient cities take shape, and how our public services become anticipatory, humane and trusted.

And this is the future we are imagining, and we are trying to put all our actions into stream, and it is this operating system we dreamt of and we started running. And I believe the next chapter of statecraft will not be written in the boardrooms of traditional power centers but in the living laboratories of the global south, in cities like Hyderabad, and the world can already see a preview of what an intelligent century of governance looks like. Let us leave Bharat Mandapam today, here where this great convention is taking place, with a shared conviction that the tomorrow we are building is not just smarter, it is braver. And, you know, the great caption goes: AI for everyone, AI for human welfare should be the theme.

And also, we should, I as a policymaker, you as technology experts sitting over there, should aim and anticipate for it. I thank the organizers for giving me, you know, the time to air my pitch on behalf of our state of Telangana. I would like to thank the Salesforce team, especially the team management who invited me over here, for gracing this, and having, you know, all the best brains sitting over here, the grey matter who would be doing much more for the welfare of our human beings. Thank you very much.

Victoria Espinel

Minister, thank you so much for joining us. We very much appreciate it. It was very exciting to hear what’s happening in Hyderabad and in Telangana. Let’s kick our panel off. Alright, so I am going to start with an icebreaker. Everyone gets 30 seconds to respond. This panel is about AI agents. So, I’m going to start there and then go towards me: what would you say is the single biggest difference that you see between AI last year, when we were sitting here, and the AI agents that we are seeing today? Saibal, can you kick us off?

Saibal Chakraborty

So I think in my mind the conversation has moved decisively towards agentic AI. We are no longer talking, as the Honorable Minister also said, about solving discrete problems or discrete searches. We are now looking at end-to-end AI-led execution of business processes or government processes. I think that’s the single biggest change in thinking that has come up.

Victoria Espinel

Professor Lee Tiedrich?

Lee Tiedrich

To put this in context, I was involved in the International AI Safety Report, and we just had our panel on that a little while ago. And Professor Bengio was saying the biggest change from ’25 to ’26 is the emergence of agentic AI. And my perspective is that its ability not only to do the end-to-end, but to also act on behalf of people, is really the big change.

Victoria Espinel

Mike?

Mike Haley

So I’m probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve specific problems. What we see now are agents that are able to abstract the problem, do chain-of-thought reasoning, take that and turn it into sequenced action, and turn to multi-agent, sort of systems-level thinking. So it’s the move from task-specific to systems-level that is the big shift that I’m seeing.

Victoria Espinel

And Srini?

Srinivas Tallapragada

Yeah, so I think for me the big shift has been from co-pilot, human-in-the-loop, to agents which can act and really provide value, business value. And that’s been the big shift.

Victoria Espinel

So let’s talk about that value. Let’s talk about AI agents as a force multiplier. I’m going to start here this time. Srini, you lead engineering for one of the biggest platforms in the world. There’s a lot of discussion about AI agents. Can you demystify this? What does that mean?

Srinivas Tallapragada

Yeah. So I think, what does that mean? An agent, just like a human, first of all, has to act. It has agency and it acts. That’s the first big difference. And like any agent, it has to have a couple of things. It has to have a role. Just like a human, it needs to know what it’s supposed to do, what are the jobs to be done. It needs knowledge. Just like what I have in my mind, an agent has to have knowledge, some memory, both short-term and long-term memory. And then it should also be able to act. You know, in a digital world, it should be able to act on an API or something.

And then it should be able to act wherever the surface is, wherever the user is interacting with it: a WhatsApp channel or web channel or a digital channel or an SMS text. More importantly, most important in all of this, we should have guardrails on what it’s not supposed to do. That’s the most important. And then all of it has to be covered, to make it useful, with what we call a trust layer, because these things can hallucinate; they can have bias, they can have toxicity. You have to avoid all of that, and they are unpredictable ultimately, so there should be governance, then auditability, so you can track all of this. Doing all of this is what an agent does. This is also why, even though there is a lot of hype, in reality it hasn’t diffused enough. This is the business value gap which we are trying to bridge as the vendors.

Victoria Espinel

Thank you. Saibal, I’m going to go to you next. So let’s talk about governance. We sit here in Delhi, the capital of one of the greatest nations of the world. The public sector: are they ready for this? How do we think about that?

Saibal Chakraborty

So I think, let me not answer that question; I think the public sector needs to be ready. All the way from managing public finances and public procurement to managing their workflows and processes better, there is no way that the public sector can avoid this. However, as Srini, you pointed out, the stakes here are very, very high. So imagine an agent crafting an RFP, a multi-million or a billion dollar RFP, on behalf of the government. And you know, in public procurement, we often sacrifice speed for procedural tightness. So how do we actually... what guardrails do we put around an agent, or more? So can it really be end-to-end? Can it really be fully autonomous?

Or do I still need that last human layer to make sure the T’s are crossed and the I’s are dotted? Because the stakes are really high, and a mistake can lead to a lot of negative impact. So the public sector has to be ready, but some of these guardrails have to be thought through. In the context of the public sector, are agents fully autonomous, or do they still operate with a little bit of that human layer? I think that has to be thought through.

Victoria Espinel

That’s great, thank you. I love that you said RFPs, because that’s a concrete example. So let’s talk a little bit about use cases. Mike, I’m going to go to you. Let’s talk about resilient infrastructure. One of the examples I hear a lot for AI agents is that they can help you make reservations, and I love to eat, so making restaurant reservations is actually pretty valuable to me. But could an AI agent do something like design a bridge? Could it design an energy grid? Where do we stand between reality and science fiction?

Mike Haley

Yes, I think we’re tracking pretty quickly toward agents being able to do just those kinds of things. In the past, using computational methods and AI for these things, which have been around for a reasonable time, has been very difficult. If you’re using some form of computational method or AI to design a bridge, you have to specify that bridge perfectly. You have to give it perfect inputs. Now, it turns out that when a designer is designing something, they don’t have perfect inputs. The process of design is actually figuring out what your inputs are, right? So this has always been a bit of a barrier to people using these advanced methods.

With AI, and specifically AI agents, you’ve now got a much easier way of interacting. It’s more forgiving toward fuzzy requirements and earlier stages of thinking. It’s able to give you things that inspire you. So one of the things I talk a lot about publicly is the notion of agents and creatives working in a loop together. It breaks the cycle where the engineer has to come up with every idea from scratch. Rather, describe what you’re doing and let the agents explore. I’ll give you one example specifically in infrastructure, because you wanted to get concrete: something we work with is water systems. We’ve built AI agents that can analyze floodplains.

They can analyze how you might want to think about water drainage and those kinds of things. So every time you make a decision early in your design, you can let this run through, and it will optimize your design to ensure that drainage will be successful. Now, drainage seems like a small side thing, but it’s a pretty massive part of infrastructure. Having an agent handle that for you is a pretty big deal.

Victoria Espinel

Mike, I have very close family ties to Louisiana, so drainage and flood zones, that is not a small thing. That is a very, very big thing. And actually, that’s a perfect segue to the question I wanted to ask Srini. So one of the most complex things that a government might have to deal with is disaster response. Is that a place where AI agents could be helpful?

Srinivas Tallapragada

I really like the theme, welfare for all. And while we can think of very big things AI might do, AI can add value right now, and disaster response is one good example. Another example I wanted to give: the key is to give back time to people. That’s very valuable; giving back time is a very noble goal, in my opinion. So we have this very interesting use case in a city on the Thames in the UK, where they created an agent called Bobby. “Bobby” is a UK term for a policeman, and citizens ask it a lot of questions that are not emergencies, and Bobby answers them.

In more than 90% of cases, they get a lot of value. What was also interesting to me was another city, in Tasmania, which is using our product Agentforce to roll out agents to more than a thousand police officers. A lot of the time when they are in the field, police officers, whether new or experienced, have a lot of questions. They call this agent Terry, and a lot of officers say Terry is their best partner. So while we can think about futuristic uses, here and now there are a lot of things we can provide with today’s technology and guardrails, in the public sector and obviously in the private sector. If you have the right platform, with trust and governance as foundational values and all the right guardrails, you can add a lot of value. We are seeing thousands of examples across the public and private sectors following a crawl-walk-run mode: you start with something basic and still add value. The more esoteric cases with multi-agent orchestration exist, but you can start with the basics today and still get a lot of value. That’s what we are seeing.

Victoria Espinel

That’s great. So, Professor Tiedrich, we’ve talked a little bit about how agents can help governments serve their publics. Are there risks there? Are there risks of over-reliance?

Lee Tiedrich

Yeah, there are definitely risks, and I share the view of my co-panelists that there are a lot of benefits to using AI in government and improving government services worldwide. But like everything else, we have to do it cautiously and smartly, and some of it comes back to the human factor: pick your use cases wisely. One of the themes in the safety report is that AI is emerging very jaggedly. There are some use cases, like computer programming, where it is really good. There are others that may not be quite ready for prime time. So when we think about over-reliance, it means looking at where AI is excelling, focusing on those use cases, and maybe putting sandboxes around some of the others to give them a little more time to mature.

The second aspect of over-reliance, picking up on some of the great points, is the guardrails. One of the things in the safety report is good news: we’ve made a lot of progress on guardrails and risk management. But as the technology moves quickly, there is a lot more work to be done. So we should not rely so heavily that we overlook guardrails and where humans should be in the loop. And the third thing I’ll mention is the interoperability of different agents. As agents start to call upon third-party agents, we need to think through what the guardrails are, how you choose them, how you allocate liability, and how you test the agents you are going to bring into your system.

Victoria Espinel

So guardrails have come up. Srini mentioned them; you just mentioned them. Let’s talk about guardrails a little bit. Srini, we hear about chatbots, we hear about hallucinations. Those can be annoying. But when you’re talking about a government deploying AI agents, the consequences can be extremely significant: a hallucinating agent can be quite dangerous. So let’s talk about guardrails. How do you engineer trust into a system so that a minister or a secretary can feel confident that it is a tool they can use to serve their people?

Srinivas Tallapragada

These systems can drift, and they can hallucinate. So you need a command center where you can see all of it. This is the difference between a pilot or a demo, of which you can find thousands on YouTube, and real life, where these things matter. We had to build all of this so that customers or governments can build confidence: they can audit, they can test, and not only they themselves but an independent party can also test. All of that infrastructure is what is required to make this a reality. But once you do that, there is huge value you can immediately provide to customers or citizens.

Mike Haley

Can I just add to that quickly? Because you hit a really interesting point at the end there. When people talk about guardrails, they think of guardrails as this perfect thing: at some point the guardrails will get strong enough that every result is perfect, completely predictable, and we’re good. I think we need to be honest about that. We’re talking about systems that are inherently probabilistic. You’re never going to make a probabilistic system 100% deterministic; it’s an oxymoron. So what we’ve discovered is that you do all the guardrails work we’re all talking about, but, as you were saying at the end, you also build systems that can look at the accuracy of what’s produced and give you some feedback on how well the solution is going to perform. And then, and this is very important, you give control to the human being, in our case to an engineer, who is able to say, oh, I get it.

The result is a little off. I’m going to give it some more feedback, reassess the results, and run it again. Or I might even go in myself and tweak that information. What we’ve discovered, when I’m talking to an engineer and explaining how this stuff works, is that if I don’t give them that level of control, they don’t trust the system. The minute they know they can actually control it, they do. So trust doesn’t depend on a perfect answer. Trust actually depends on transparency and understanding, and then the ability to come in and control something.

Victoria Espinel

But I think that’s also because the engineers understand that this is a tool, a tool for them to use, to help them. It’s not something that is going to take control. Is there anything specifically with respect to infrastructure that you think governments should be mindful of?

Mike Haley

Yeah. Well, look, infrastructure is not known as the easiest and quickest thing to build in countries, right? One of the really boring but absolutely necessary things with infrastructure is to make sure the digital ecosystem around that infrastructure is set. And I see a lot of places in the world getting into building infrastructure, trying to do it quickly, without getting all that digital infrastructure in place: building information modeling, ensuring that every part of your infrastructure is correctly modeled and represented at the right level. AI is not going to magically come in and solve a bunch of problems unless you’ve got a lot of that digital foundation in place already.

So it’s a bit of the boring work, but getting that in place early is one of the biggest things. I’ve had a number of conversations here this week about the 2047 initiative in India and the amount of infrastructure that needs to be built in this country, and the importance of using something like building information modeling and getting standard data in place now. If you get that in place now, all this AI goodness is far easier to deploy against it.

Victoria Espinel

Yeah, please.

Srinivas Tallapragada

Yeah, so I heard a lot of discussion around sovereignty, and I think we should think of sovereignty at two levels: strategic sovereignty and technical sovereignty. By strategic sovereignty, I mean control over your data, your governance policies, and your operational policies. That you can implement right now and get value from. Then there is the technical one, where people want to control their entire supply chain, down to the chips. I would like governments, public officials, and policy officials to think of these as two tracks. One takes longer and requires a lot of capital investment. Don’t let the second track stop you from getting the benefit of the first. The first track is easy: you can ensure the data doesn’t leave your country, your policy guardrails are in control, and there is a human in the loop. You still get a lot of benefits while you continue on the second track. That would be my request to all the governments.

Saibal Chakraborty

Can I make a quick build on what Mike said? Because I do a lot of my work in the public sector with governments. One of the biggest guardrails, beyond policies, is actually the skilling, the upskilling. Like Mike said, it’s an inherently probabilistic system, so you cannot expect it to give correct results all the time; there is no such thing as a guaranteed correct result. The person actually using this tool at the district level or the state level to make real government decisions is not an AI engineer. That person needs to be upskilled and needs to be told what can be trusted and what requires that additional layer of checking.

So if agentic AI is to take off in the public sector at scale, then that upskilling at various levels of government, on what can be trusted and what cannot, is also a very, very big component.

Victoria Espinel

Yes, I totally agree. Professor, I wanted to ask you: it feels trite to say technology is moving really quickly, but in the last few years AI has been moving very, very quickly. We’ve talked a lot about guardrails. How should governments think about this? How are governments going to keep up in setting expectations, and potentially regulation, for a technology that is moving so quickly?

Lee Tiedrich

It’s a hard one. AI has evolved into a global, multidisciplinary field, and I think we need to bring the global community together. We need policymakers and lawyers talking with engineers and sector specialists to inform policy in real time. I’m a big fan of this; I spent a year working at NIST, the U.S. National Institute of Standards and Technology. We need to figure out how to do some of the guardrails starting with the science. Then the science can inform how to develop the standards and how to develop the evals. And then it becomes a question.

Different countries have different views on whether to regulate or not. The U.S. has a very deregulatory approach; Europe is the opposite. But if we can agree on what the common standards are for evaluation and testing, then governments are free to decide whether to mandate them or not. And there is an important nuance to add to the mix, which has been a theme of this conference: we want some standardization on these evaluation mechanisms.

But we have to recognize that we speak different languages and have different cultural norms. So when we want standardization, we have to be able to localize what the evaluation looks like; what is appropriate in one country is not going to be appropriate in another. It’s hard, but start with the science: the scientific report is what I would point people to. Building on that, working through the AISI network, through standards organizations, and through all these other initiatives, we can develop the evaluations and build that evaluation ecosystem, and then regulations can overlay on top of that as policymakers think appropriate for their jurisdictions.

Victoria Espinel

But if I could ask a follow-up question to you, or any of the panelists: I think one of the challenges for companies, and I speak for the enterprise software companies that I represent, is that it’s really helpful to know what government expectations are. Industry is looking for clarity and predictability.

Mike Haley

Should I take a shot at it? As a software provider at Autodesk, we definitely deal with that, Victoria. We’ve had a couple of approaches. One, we’re obviously going to stay on top of this all the time, working with governments and making this part of the conversation. I spend a good part of my year traveling around the world, talking to governments, trying to help them understand what needs to happen, but also helping us understand, as you said, what they want. But the main problem is the sheer variance. Even within the United States, there are differences between state efforts, right? And then around the world, it gets even more complicated.

What we’ve tried to do is run as far ahead of this as we can. If there is a way we can build in good controls right from the beginning, we build those controls to the maximum extent we reasonably can. I’ll give you an example. In every AI feature we have in our software, we have something called a transparency card, which looks like a nutrition label on food. That label tells you what kind of model is behind the feature, what data was used to train it, what level of control you have, what accuracy it has, any bias we know about in the model, that kind of thing.
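The transparency card described here might look something like the following sketch. The field names mirror the attributes Mike lists, but both the names and the values are illustrative guesses, not Autodesk’s actual card format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class TransparencyCard:
    """A 'nutrition label' for one AI feature; all fields are illustrative."""
    feature: str
    model_type: str      # what kind of model is behind it
    training_data: str   # what data was used to train it
    user_controls: str   # what level of control the user has
    accuracy: str        # reported accuracy, with caveats
    known_biases: tuple  # any biases known to exist in the model

# Hypothetical card for a floodplain-analysis feature
card = TransparencyCard(
    feature="floodplain analysis",
    model_type="learned surrogate over hydrology simulations",
    training_data="licensed simulation runs (illustrative)",
    user_controls="inputs editable; results can be re-run",
    accuracy="reported against held-out scenarios",
    known_biases=("sparser coverage of arid regions",),
)

# The card ships alongside the feature, e.g. serialized as JSON
print(json.dumps(asdict(card), indent=2))
```

Making the record frozen and serializable is the point: the card is a fixed, publishable disclosure, not something the feature can quietly rewrite.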

And it’s a standard thing. We rolled that out about a year ago, really to stay ahead of things, so that if governments started asking for these things, well, we’ve got a transparency card. What has actually happened is that there’s now a bunch of interest in that becoming part of a standard. I’m not saying that just to tout us; other companies are doing great things in this space as well, and you all are doing a bunch of good work here too. I think this is an opportunity for those of us in industry to run ahead and help define some of these things, because it is moving so fast.

And, maybe I shouldn’t say this publicly, but the government doesn’t always have the best answers, right? So we can work with governments to help them develop those answers and come up with good things, which then helps us resist some of the complexity coming down the line.

Srinivas Tallapragada

Yeah, so one of the challenges here is that you can project too much. It’s an exponential curve; it’s very hard to project. Sometimes it has to be learned by doing. I think the biggest thing all governments can do is create a policy framework for how to update these standards. Today it usually takes a long time, so everybody is afraid, and then it is even harder to change a standard. So they try to solve everything up front, while things keep changing. The main thing policymakers could do is build in a feedback loop, a way to improve the policy framework, because then you don’t need to be afraid of getting everything right on day one.

You understand that, hey, you set some basics, and as new data comes in, you can update them. In engineering and product, we call this the product feedback loop and agile development. If we have something equivalent for policy, then everybody is clear, because we all want the same thing. There is no disconnect on the fundamentals: we want AI to help, in a net-positive way, our entire community. And if the regulatory framework can change along with a changing technology, then we are not afraid, because we don’t need to get everything right on day one.

And we can learn by doing it. So agile regulation.

Victoria Espinel

I have loved this panel. Unfortunately, we’re coming to a close, so I’m going to ask each of you one final question. Saibal, I’m going to start with you and then head this way. If we were so fortunate as to meet again in Delhi in three years, looking back, what is the one thing you would point to as the best way to determine whether we have succeeded in addressing some of these challenges? I know it’s a big question. Sorry, but thank you.

Saibal Chakraborty

Since we’re in Delhi, I’ll give the answer in the Indian context. As inclusivity is one of the primary themes of this particular conference, for me the true success of AI will be if a farmer can talk to a small-language-model-powered tool in his or her own vernacular language and get practical advice on how to manage the crop and the cattle. If that could be scaled up across the length and breadth of India, then that, for me, is the real win for AI.

Victoria Espinel

That’s a big win. I mean, that’s a significant impact. Thank you. Great. Professor Tiedrich?

Lee Tiedrich

I’m coming back to the evaluation ecosystem. We’ve made a lot of progress over the last couple of years, but more work needs to be done. More countries, including in the Global South, are launching AISIs, AI safety or security institutes, which are not hard, binding regulation, but are governments weighing in. Real progress three years from now would be an active AISI network that is sharing information and making real progress on evaluation techniques, and, per one of the commitments that came from some of the companies yesterday, localizing that so everybody can benefit, Global North and Global South. Thank you.

Victoria Espinel

Mike?

Mike Haley

Earlier on, I spoke about infrastructure, the physical infrastructure in countries. What I would hope to see in a couple of years’ time is infrastructure genuinely being developed faster than it has ever been developed, which is a really, really tough problem to make happen in the physical world. As a measure of AI truly delivering, that’s an incredible measure. But on top of that, it needs to happen without compromising safety, and without becoming a big black box that nobody understands. So what I would love to see is not only that infrastructure being developed faster, but the public engaged with it.

The engineers and the people doing it feel comfortable with it. They feel secure. They feel fine signing off on it, because they believe it is reliable. Thank you.

Victoria Espinel

Srini?

Srinivas Tallapragada

If AI is as revolutionary as we all assume, I would hope that in three years the bottom 50% income percentile will have seen a measurable rise in per capita income. That, for me, is the real impact of this technology.

Victoria Espinel

That’s fantastic. I want to say thank you to all of our panelists, a special thank you to Srini and to Salesforce for bringing us all together here today, and thank you to our audience for joining us. A big round of applause for our panelists. Thank you.


Minister Sridhar Babu

Speech speed

122 words per minute

Speech length

1656 words

Speech time

811 seconds

Evolution to Agentic AI

Explanation

The minister says AI is moving beyond simple question answering toward agents that can act autonomously. This marks a shift from generative models that only provide responses to systems that can take real‑world actions.


Evidence

“We are moving from them to agentic AI that acts now.” [1]. “We are moving beyond generative AI that simply answers.” [2].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


AI as Co‑Governor for Anticipatory Services

Explanation

The minister describes AI agents as “co‑governors” that can predict floods before they happen and allocate resources proactively, delivering services before citizens request them.


Evidence

“Tomorrow, as our government here in Telangana also sees, we rely on AI as co-governor systems that can predict a flood before the first cloud gathers over the Musi.” [43]. “You know, allocate resources before the crisis and deliver services before citizens ever need to ask.” [45].


Major discussion point

AI Agents as Public Sector Multipliers


Topics

Social and economic development | Artificial intelligence


AI Accelerates Infrastructure Design

Explanation

AI agents are already being used to analyse flood‑plains, showing how they can speed up critical infrastructure planning and resilience work.


Evidence

“So we’ve built AI agents that can analyze floodplains.” [9].


Major discussion point

AI Agents as Public Sector Multipliers


Topics

Environmental impacts | Artificial intelligence


Sovereign AI Nerve Centre & Data Exchange

Explanation

Telangana has created a sovereign AI nerve centre and an open data‑exchange platform to keep AI data under national control and ensure integrity.


Evidence

“our first sovereign AI nerve center… the sovereign data open pipeline ensures that the intelligence is grounded in integrity.” [112]. “Our country’s first sovereign AI nerve center.” [113].


Major discussion point

Data Sovereignty and Strategic Control


Topics

Data governance | Artificial intelligence | The enabling environment for digital development



Saibal Chakraborty

Speech speed

152 words per minute

Speech length

569 words

Speech time

224 seconds

End‑to‑End AI‑Led Execution

Explanation

The speaker notes that the conversation has moved toward AI that can execute entire business or government processes without human hand‑off.


Evidence

“We are now looking at end-to-end AI-led execution of business processes or government processes.” [11].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


Agents Draft Multi‑Million RFPs (Need Guardrails)

Explanation

He imagines agents writing large procurement documents, but stresses that strong guardrails are required before such autonomy is trusted.


Evidence

“So imagine an agent crafting an RFP, a multi-million or a billion-dollar RFP on behalf of the government.” [30].


Major discussion point

AI Agents as Public Sector Multipliers


Topics

Social and economic development | Artificial intelligence


Upskilling Public‑Sector Staff

Explanation

Because AI systems are probabilistic, the speaker argues that government users need training to know what to trust and when to intervene.


Evidence

“…the person who’s actually using this … is not an AI engineer; that person needs to be upskilled and needs to be told what can be trusted and what requires that additional layer of check.” [65].


Major discussion point

Guardrails, Trust, and Human Oversight


Topics

Capacity development | Building confidence and security in the use of ICTs


Success: Vernacular Advice for Farmers

Explanation

He defines success as AI that can give farmers practical advice in their own language at scale, improving livelihoods across India.


Evidence

“the true success of AI will be if a farmer could talk to a small-language-model-powered tool in his or her own vernacular language and get practical advice … and if that could be scaled up …” [146].


Major discussion point

Defining Success and Impact Metrics


Topics

Social and economic development | Monitoring and measurement



Lee Tiedrich

Speech speed

197 words per minute

Speech length

833 words

Speech time

252 seconds

Emergence of Agentic AI & Acting for People

Explanation

Lee points out that the biggest change is AI agents that not only complete tasks end‑to‑end but also act on behalf of users.


Evidence

“And Professor Bengio was saying the biggest change from 25 to 26 is the emergence of agentic AI.” [15]. “And my perspective is its ability not only to do the end-to-end, but to also act on behalf of… of people is really the big change.” [19].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


International Standards & Evaluation Ecosystem

Explanation

He stresses the need for global multidisciplinary collaboration to build standards and evaluation mechanisms that can inform regulation.


Evidence

“…building on the AISI network, working through standards organizations and all these other initiatives to develop the evaluation, build that evaluation ecosystem, and then regulations can kind of overlay on top of that…” [94]. “So when we want to have standardization, we’ve got to be able to localize what the evaluation looks like: what might be appropriate in one country isn’t going to be appropriate in another country.” [95].


Major discussion point

Regulation, Standards, and Agile Policy


Topics

Artificial intelligence | The enabling environment for digital development


Over‑Reliance Risks & Guardrails

Explanation

Lee warns that excessive reliance on AI without proper guardrails can be dangerous and calls for careful use‑case selection and sandboxing.


Evidence

“I think also the over-reliance, picking up on some of the great points, is the guardrails.” [62]. “So it’s hard, but I think, you know, starting with the science, the scientific report, I would point people to, you know, building on that, working through the AISI network…” [94].


Major discussion point

Guardrails, Trust, and Human Oversight


Topics

Building confidence and security in the use of ICTs | Artificial intelligence



Mike Haley

Speech speed

213 words per minute

Speech length

1516 words

Speech time

426 seconds

Shift to Systems‑Level AI Agents

Explanation

Mike observes a transition from narrow, task‑specific agents to systems‑level agents that can orchestrate multiple steps.


Evidence

“So the move from task-specific to systems-level is the big shift that I’m seeing.” [8].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


Chain‑of‑Thought Reasoning & Orchestration

Explanation

He highlights that modern agents can perform chain‑of‑thought reasoning and coordinate multi‑agent workflows to solve complex problems.


Evidence

“What we see now are agents that are able to abstract the problem, chain-of-thought reasoning, being able to take that and turn it into sequenced action, and the multi-agent sort of systems-level thinking.” [31].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


Trust Depends on Transparency & Control

Explanation

Mike states that trust in probabilistic AI systems comes from transparency, understanding, and the ability for humans to intervene.


Evidence

“Trust actually depends on transparency and understanding and then the ability to come in and control something.” [72].


Major discussion point

Guardrails, Trust, and Human Oversight


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Transparency “Nutrition‑Label” Cards

Explanation

His company provides a model‑provenance card that acts like a nutrition label, giving regulators and users clear information about data, bias, and performance.


Evidence

“In every AI feature we have in our software, we have something called a transparency card, which looks like a nutrition label on food.” [137]. “But that nutrition label tells you what kind of model is behind it, what data was used to train it, what kind of level of control you have, what accuracy it has, any bias that we know about in the model…” [138].


Major discussion point

Regulation, Standards, and Agile Policy


Topics

Artificial intelligence | The enabling environment for digital development


Accelerating Infrastructure Design

Explanation

Mike notes that AI agents can analyse floodplains and speed up the design of resilient infrastructure.


Evidence

“So we’ve built AI agents that can analyze floodplains.” [9].


Major discussion point

AI Agents as Public Sector Multipliers


Topics

Environmental impacts | Artificial intelligence


Success: Faster, Safer Infrastructure Delivery

Explanation

He envisions a future where AI enables infrastructure to be built faster and with greater public confidence.


Evidence

“So what I would love to see is not only is that infrastructure being developed faster, but the public is engaged with it.” [101]. “What I would hope to see is, in a couple of years, we’re actually seeing infrastructure genuinely get developed faster than it’s ever been developed…” [150].


Major discussion point

Defining Success and Impact Metrics


Topics

Social and economic development | Monitoring and measurement


S

Srinivas Tallapragada

Speech speed

171 words per minute

Speech length

1282 words

Speech time

449 seconds

From Co‑Pilot to Value‑Adding Agents

Explanation

He describes the shift from AI as a supportive co‑pilot to agents that can act independently and deliver business value.


Evidence

“Yeah, so I think for me the big shift has been from co-pilot human in the loop to agents which can act and really provide value, business value.” [14].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


Trust Layer, Auditability & Guardrails

Explanation

Srinivas stresses that agents need a trust layer with auditability and guardrails to prevent hallucinations, bias, and toxicity.


Evidence

“…they can hallucinate it can have bias, it can have toxicity, avoid all of that and they are unpredictable ultimately so it should have governance then it’s auditability…” [67].


Major discussion point

Guardrails, Trust, and Human Oversight


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Strategic vs Technical Sovereignty

Explanation

He differentiates between strategic sovereignty (control over data and policies) and technical sovereignty (control over chip‑level supply chains).


Evidence

“There’s strategic sovereignty and technical sovereignty.” [117]. “…strategic sovereignty … you get control on data, your governance policies…” [123]. “…technical one where people want to control their entire supply chain from the chips…” [124].


Major discussion point

Data Sovereignty and Strategic Control


Topics

Data governance | Artificial intelligence


Agile Regulation

Explanation

He calls for regulatory frameworks that can evolve quickly alongside AI technology.


Evidence

“So agile regulation.” [22]. “If the regulatory framework is able to change, if we can change that, then we are not afraid to say that we need to get everything right on day one.” [134].


Major discussion point

Regulation, Standards, and Agile Policy


Topics

The enabling environment for digital development | Artificial intelligence


Disaster Response as Multiplier

Explanation

He cites disaster response as a concrete example where AI agents can return valuable time to citizens.


Evidence

“And disaster response is one good example.” [63].


Major discussion point

AI Agents as Public Sector Multipliers


Topics

Social and economic development | Artificial intelligence


Success Metric: Bottom‑50% Income Uplift

Explanation

He proposes measuring success by a measurable increase in per-capita income for the bottom 50% of the income distribution within three years.


Evidence

“I would hope in three years, the bottom 50% income percentile, their per capita income has been measurable.” [155].


Major discussion point

Defining Success and Impact Metrics


Topics

Monitoring and measurement | Social and economic development


V

Victoria Espinel

Speech speed

155 words per minute

Speech length

1001 words

Speech time

387 seconds

Agentic AI as the Biggest Difference

Explanation

Victoria frames the rise of AI agents as the single most significant change compared with a year ago.


Evidence

“what would you say is the single biggest difference that you see between AI last year, when we were sitting here, and the AI agents that we are seeing today.” [38]. “There’s a lot of discussion about AI agents.” [4].


Major discussion point

Evolution to Agentic AI


Topics

Artificial intelligence


Guardrails and Trust Discussed Extensively

Explanation

She highlights that guardrails, hallucination risks, and trust have been recurring themes throughout the panel.


Evidence

“We’ve talked a lot about guardrails.” [55]. “So guardrails have come up.” [57]. “The hallucinating of an agent can be quite dangerous.” [59]. “Let’s talk about guardrails a little bit.” [60].


Major discussion point

Guardrails, Trust, and Human Oversight


Topics

Building confidence and security in the use of ICTs | Artificial intelligence


Industry Needs Clarity & Predictability

Explanation

She notes that enterprises seek clear, predictable regulatory expectations to safely adopt AI agents.


Evidence

“Like industry is looking for clarity and predictability.” [82]. “How should governments think about this?” [143].


Major discussion point

Regulation, Standards, and Agile Policy


Topics

The enabling environment for digital development | Artificial intelligence


AI Agents as Public‑Sector Multipliers

Explanation

Victoria asks the panel to consider how AI agents can amplify government impact, framing them as a force multiplier.


Evidence

“Let’s talk about AI agents as a forceful multiplier.” [35].


Major discussion point

AI Agents as Public Sector Multipliers


Topics

Social and economic development | Artificial intelligence


Agreements

Agreement points

AI has fundamentally evolved from tools to autonomous agents

Speakers

– Saibal Chakraborty
– Lee Tiedrich
– Mike Haley
– Srinivas Tallapragada

Arguments

AI has moved from discrete problem-solving to end-to-end execution of business and government processes


The biggest change is AI’s ability to act on behalf of people, not just assist them


AI has evolved from narrow task-specific agents to systems capable of abstract reasoning and multi-agent orchestration


The shift has been from co-pilot human-in-the-loop systems to autonomous agents that can provide real business value


Summary

All panelists agree that the most significant change in AI over the past year is the emergence of agentic AI – systems that can act autonomously rather than just assist humans. This represents a fundamental shift from AI as a tool to AI as an autonomous actor capable of end-to-end process execution.


Topics

Artificial intelligence


Guardrails and trust mechanisms are essential for AI deployment

Speakers

– Saibal Chakraborty
– Lee Tiedrich
– Mike Haley
– Srinivas Tallapragada

Arguments

Public sector must adopt AI agents but requires careful consideration of guardrails due to high stakes in government decisions


Governments should choose use cases wisely, focus on areas where AI excels, and maintain appropriate human oversight


Trust depends on transparency, understanding, and human control rather than perfect accuracy from probabilistic systems


AI systems require comprehensive trust layers including governance, auditability, and command centers to monitor performance and prevent drift


Summary

There is unanimous agreement that while AI agents offer significant benefits, robust guardrails, transparency, and trust mechanisms are crucial for safe deployment, especially in high-stakes government applications. All speakers emphasize that trust comes from transparency and human control rather than perfect system performance.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


AI can provide immediate practical value in government services

Speakers

– Minister Sridhar Babu
– Srinivas Tallapragada
– Mike Haley

Arguments

AI should enable anticipatory governance that acts before harm, prepares before shock, and protects before loss


AI agents can assist in disaster response, citizen services, and police support, providing immediate value through time-saving applications


AI agents can design complex infrastructure like bridges and water systems by handling fuzzy requirements and optimizing designs


Summary

Speakers agree that AI agents can deliver tangible value in government services right now, from citizen support and disaster response to infrastructure design and anticipatory governance. The focus is on practical applications that save time and improve service delivery.


Topics

Artificial intelligence | Social and economic development


Success should be measured by inclusive impact and broad-based benefits

Speakers

– Saibal Chakraborty
– Srinivas Tallapragada

Arguments

True AI success means inclusive access, such as farmers receiving practical advice in their own languages


Success should be measured by whether the bottom 50% income percentile sees measurable improvement in per capita income


Summary

Both speakers emphasize that AI success should be measured not by technological advancement but by inclusive impact – whether the technology benefits the most disadvantaged populations and creates broad-based economic improvements.


Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Regulatory frameworks need to be adaptive and collaborative

Speakers

– Lee Tiedrich
– Mike Haley
– Srinivas Tallapragada

Arguments

Global collaboration between policymakers, engineers, and sector specialists is needed to develop common evaluation standards while respecting cultural differences


Industry should proactively develop controls and transparency measures to help define standards rather than wait for government regulation


Agile regulatory frameworks that can adapt and update quickly are needed to keep pace with rapidly evolving AI technology


Summary

Speakers agree that effective AI governance requires collaborative, adaptive regulatory approaches that bring together multiple stakeholders and can evolve quickly with the technology, rather than static, top-down regulation.


Topics

Artificial intelligence | The enabling environment for digital development


Similar viewpoints

Both emphasize that successful AI deployment requires not just technical solutions but human capacity building and comprehensive monitoring systems to ensure proper use and oversight

Speakers

– Saibal Chakraborty
– Srinivas Tallapragada

Arguments

Upskilling government workers to understand what can be trusted and what requires additional verification is crucial for AI adoption


AI systems require comprehensive trust layers including governance, auditability, and command centers to monitor performance and prevent drift


Topics

Capacity development | Artificial intelligence


Both advocate for proactive, collaborative approaches to AI governance where industry takes initiative in developing standards while working with adaptive regulatory frameworks

Speakers

– Mike Haley
– Srinivas Tallapragada

Arguments

Industry should proactively develop controls and transparency measures to help define standards rather than wait for government regulation


Agile regulatory frameworks that can adapt and update quickly are needed to keep pace with rapidly evolving AI technology


Topics

Artificial intelligence | The enabling environment for digital development


Both emphasize the importance of data sovereignty and local control in AI implementation, with practical examples of how governments can maintain control while benefiting from AI technology

Speakers

– Minister Sridhar Babu
– Srinivas Tallapragada

Arguments

Telangana has implemented AI-driven governance including Telugu-first AI for land records, satellite analysis for urban planning, and sovereign AI infrastructure


Governments can implement strategic sovereignty through data control and governance policies while pursuing longer-term technical sovereignty


Topics

Artificial intelligence | Data governance | Social and economic development


Unexpected consensus

Industry-government collaboration in standard setting

Speakers

– Mike Haley
– Lee Tiedrich
– Srinivas Tallapragada

Arguments

Industry should proactively develop controls and transparency measures to help define standards rather than wait for government regulation


Global collaboration between policymakers, engineers, and sector specialists is needed to develop common evaluation standards while respecting cultural differences


Agile regulatory frameworks that can adapt and update quickly are needed to keep pace with rapidly evolving AI technology


Explanation

Unexpectedly, there was strong consensus that industry should take a proactive role in developing AI standards rather than waiting for government regulation. This collaborative approach between industry and government was endorsed across speakers, suggesting a shift from traditional regulatory models.


Topics

Artificial intelligence | The enabling environment for digital development


Probabilistic nature of AI systems as acceptable

Speakers

– Mike Haley
– Saibal Chakraborty
– Srinivas Tallapragada

Arguments

Trust depends on transparency, understanding, and human control rather than perfect accuracy from probabilistic systems


Public sector must adopt AI agents but requires careful consideration of guardrails due to high stakes in government decisions


AI systems require comprehensive trust layers including governance, auditability, and command centers to monitor performance and prevent drift


Explanation

There was unexpected consensus that the inherently probabilistic and imperfect nature of AI systems is acceptable, even in high-stakes government applications, as long as proper transparency, control mechanisms, and guardrails are in place. This represents a mature understanding of AI limitations.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Overall assessment

Summary

The panel demonstrated remarkably high consensus on key issues around AI agents, including their transformative potential, the need for robust guardrails, the importance of inclusive impact, and the necessity of collaborative governance approaches. All speakers agreed on the fundamental shift from AI as tool to AI as agent, while emphasizing practical implementation with proper safeguards.


Consensus level

Very high consensus with strong alignment on both opportunities and challenges. This suggests the field has matured to a point where there is shared understanding of both AI’s potential and its risks. The implications are positive for coordinated global action on AI governance and deployment, as stakeholders from government, industry, and academia share similar frameworks for thinking about AI agents.


Differences

Different viewpoints

Level of human oversight required in AI agent deployment

Speakers

– Saibal Chakraborty
– Srinivas Tallapragada

Arguments

Public sector must adopt AI agents but requires careful consideration of guardrails due to high stakes in government decisions


The shift has been from co-pilot human-in-the-loop systems to autonomous agents that can provide real business value


Summary

Chakraborty emphasizes the need for human oversight in high-stakes government decisions, questioning whether AI agents can be fully autonomous, while Tallapragada advocates for moving toward more autonomous agents that can act independently and provide business value.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Approach to AI system reliability and trust

Speakers

– Mike Haley
– Srinivas Tallapragada

Arguments

Trust depends on transparency, understanding, and human control rather than perfect accuracy from probabilistic systems


AI systems require comprehensive trust layers including governance, auditability, and command centers to monitor performance and prevent drift


Summary

Haley emphasizes that trust comes from user control and transparency rather than perfect systems, while Tallapragada focuses on building comprehensive monitoring infrastructure and governance systems to ensure reliability.


Topics

Artificial intelligence | Building confidence and security in the use of ICTs


Regulatory approach and timeline

Speakers

– Lee Tiedrich
– Srinivas Tallapragada

Arguments

Global collaboration between policymakers, engineers, and sector specialists is needed to develop common evaluation standards while respecting cultural differences


Agile regulatory frameworks that can adapt and update quickly are needed to keep pace with rapidly evolving AI technology


Summary

Tiedrich advocates for developing comprehensive global standards through international collaboration before regulation, while Tallapragada emphasizes the need for agile, iterative regulatory frameworks that can be updated quickly.


Topics

Artificial intelligence | The enabling environment for digital development


Unexpected differences

Measurement of AI success

Speakers

– Saibal Chakraborty
– Srinivas Tallapragada

Arguments

True AI success means inclusive access, such as farmers receiving practical advice in their own languages


Success should be measured by whether the bottom 50% income percentile sees measurable improvement in per capita income


Explanation

Although both speakers agree that AI success must be inclusive, their yardsticks differ unexpectedly: Chakraborty points to qualitative access, such as farmers receiving practical advice in their own languages, whereas Tallapragada insists on a hard quantitative target of measurable per-capita income growth for the bottom 50% within three years.
Topics

Artificial intelligence | Social and economic development | Closing all digital divides


Overall assessment

Summary

The main areas of disagreement center on the balance between AI autonomy and human oversight, approaches to building trust and reliability in AI systems, and regulatory strategies for keeping pace with technological change.


Disagreement level

The disagreement level is moderate but significant for implementation. While all speakers agree on AI’s potential benefits and the need for responsible deployment, their different approaches to risk management, regulatory frameworks, and success metrics could lead to substantially different implementation strategies. These disagreements reflect deeper philosophical differences about technology adoption, risk tolerance, and the role of human oversight in AI systems.




Takeaways

Key takeaways

AI has fundamentally evolved from discrete problem-solving tools to autonomous agents capable of end-to-end process execution and acting on behalf of humans


Government adoption of AI agents requires careful implementation of guardrails, transparency, and human oversight due to high-stakes decision-making


Trust in AI systems depends on transparency, human control, and understanding rather than perfect accuracy, as these are inherently probabilistic systems


Strategic sovereignty (data control and governance policies) can be implemented immediately while technical sovereignty (full supply chain control) requires longer-term investment


Upskilling government workers to understand AI capabilities and limitations is crucial for successful public sector adoption


AI agents show immediate practical value in disaster response, citizen services, infrastructure design, and inclusive applications like vernacular language support for farmers


Success should be measured by tangible improvements to society, particularly for lower-income populations and inclusive access to AI benefits


Resolutions and action items

Industry should proactively develop transparency measures and controls (like nutrition label-style transparency cards) to help define standards ahead of regulation


Governments should focus on developing agile regulatory frameworks that can adapt quickly to technological changes rather than trying to solve everything upfront


Public and private sectors should collaborate to establish common evaluation standards while respecting cultural and linguistic differences


Digital infrastructure foundations (like building information modeling) must be established before deploying AI solutions for maximum effectiveness


AI safety institutes should actively share information and evaluation techniques globally, including localization for different regions


Unresolved issues

How to balance fully autonomous AI agents versus maintaining human oversight layers in high-stakes government decisions


Managing the complexity of varying regulatory approaches across different countries and jurisdictions


Determining optimal liability allocation when AI agents call upon third-party agents


Addressing the inherent unpredictability of probabilistic AI systems while maintaining public trust


Scaling AI benefits to reach the most vulnerable populations effectively


Establishing standardized evaluation mechanisms that work across different cultural contexts and languages


Suggested compromises

Implement a two-track approach to sovereignty: pursue strategic sovereignty (data control, governance) immediately while working toward technical sovereignty (supply chain control) as a longer-term goal


Use a ‘crawl, walk, run’ approach – start with basic AI applications that provide immediate value while building toward more sophisticated multi-agent systems


Maintain human-in-the-loop systems for critical government functions while allowing full automation for lower-risk applications


Focus on transparency and user control rather than pursuing perfect AI accuracy, giving users the ability to understand and modify AI outputs


Develop common evaluation standards internationally while allowing individual countries to decide whether to mandate or recommend their use


Thought provoking comments

We are moving beyond generative AI that simply answers. We are moving from them to agentic AI that acts now… I can see and everybody can see the search bar is dying. In its place, something more profound… The nation that leads this century are those that learn to treat intelligence not as a product but as a form of a public infrastructure.

Speaker

Minister Sridhar Babu


Reason

This comment reframes AI from a consumer product to critical public infrastructure, similar to roads or electricity. It’s profound because it suggests a fundamental shift in how governments should approach AI – not as a nice-to-have technology but as essential infrastructure for national competitiveness.


Impact

This set the conceptual foundation for the entire discussion, establishing the theme that AI agents represent a paradigm shift from reactive to proactive systems. It influenced subsequent speakers to focus on practical governance applications rather than theoretical possibilities.


You’re never going to make a probabilistic system 100% deterministic. It’s an oxymoron… Trust actually depends on transparency and understanding and then the ability to come in and control something.

Speaker

Mike Haley


Reason

This cuts through the common misconception that AI systems can be made perfectly reliable through guardrails alone. It’s insightful because it redefines trust from ‘perfect results’ to ‘transparent control,’ which is more realistic and actionable for government implementation.


Impact

This comment shifted the guardrails discussion from seeking perfection to accepting probabilistic nature while maintaining human agency. It led other panelists to focus on practical control mechanisms rather than theoretical safety guarantees.


I think one of the biggest guardrails beyond policies is actually the skilling the upskilling… the person who’s actually using this who’s using the tool at the district level, at the state level to make real government decisions that person is not an AI engineer.

Speaker

Saibal Chakraborty


Reason

This identifies a critical gap often overlooked in AI deployment discussions – the human capacity building required for effective implementation. It’s particularly insightful because it recognizes that technical guardrails are insufficient without human understanding.


Impact

This comment broadened the discussion beyond technical solutions to include human factors, leading to recognition that successful AI agent deployment requires comprehensive training programs for government workers at all levels.


I think sometimes it’s learned by doing… the main thing policymakers could do is the policy framework on how to update these standards… So agile regulation.

Speaker

Srinivas Tallapragada


Reason

This introduces the concept of ‘agile regulation’ – adapting software development principles to governance. It’s thought-provoking because it suggests regulatory frameworks should be designed for iteration rather than permanence, which challenges traditional government approaches.


Impact

This comment reframed the regulation discussion from ‘getting it right the first time’ to ‘building systems that can evolve.’ It influenced the conversation toward practical, iterative approaches rather than comprehensive upfront solutions.


There’s strategic sovereignty and technical sovereignty… Don’t let the second track stop getting the benefit of the first track.

Speaker

Srinivas Tallapragada


Reason

This distinction between strategic and technical sovereignty provides a practical framework for governments to approach AI implementation. It’s insightful because it offers a path forward that doesn’t require complete technological independence before gaining benefits.


Impact

This comment provided a pragmatic solution to sovereignty concerns that had been implicit throughout the discussion, allowing governments to maintain control over data and policies while leveraging existing AI infrastructure.


If AI is so revolutionary as we all assume, I would hope in three years, the bottom 50% income percentile, their per capita income has been measurable.

Speaker

Srinivas Tallapragada


Reason

This cuts through all the technical discussion to focus on the ultimate measure of success – economic impact on the most vulnerable populations. It’s profound because it challenges the panel to think beyond efficiency gains to transformative social impact.


Impact

This closing comment elevated the entire discussion from technical implementation to social transformation, providing a concrete metric for evaluating whether AI agents truly deliver on their promise of inclusive development.


Overall assessment

These key comments shaped the discussion by progressively grounding abstract AI concepts in practical governance realities. Minister Babu’s opening reframed AI as public infrastructure, setting an ambitious tone. The subsequent comments by technical experts then systematically addressed implementation challenges – from the probabilistic nature of AI systems to the need for human capacity building and agile regulation. The sovereignty distinction provided a practical pathway forward, while the final comment on income impact elevated the conversation to focus on measurable social outcomes. Together, these insights transformed what could have been a theoretical discussion into a pragmatic roadmap for government AI adoption, emphasizing that successful implementation requires not just technical solutions but also human-centered design, adaptive governance frameworks, and clear success metrics tied to citizen welfare.


Follow-up questions

Can AI agents really be fully autonomous in public sector applications, or do they still need human oversight for high-stakes decisions?

Speaker

Saibal Chakraborty


Explanation

This is critical for determining the appropriate level of human involvement in government AI systems, especially for high-value procurement and policy decisions where mistakes can have significant negative impacts.


How do we establish liability and accountability when agents call upon third-party agents in government systems?

Speaker

Lee Tiedrich


Explanation

As AI systems become more interconnected and rely on external agents, determining responsibility for decisions and outcomes becomes crucial for governance and legal frameworks.


What specific evaluation mechanisms and standards should be developed for testing AI agents in different cultural and linguistic contexts?

Speaker

Lee Tiedrich


Explanation

This addresses the need for localized AI evaluation that respects different languages, cultural norms, and country-specific requirements while maintaining some level of standardization.


How can governments develop agile regulatory frameworks that can adapt quickly to rapidly evolving AI technology?

Speaker

Srinivas Tallapragada


Explanation

Traditional regulatory processes are too slow for the pace of AI development, requiring new approaches that allow for iterative policy updates based on real-world learning and feedback.


What digital infrastructure prerequisites must be in place before AI agents can effectively support physical infrastructure development?

Speaker

Mike Haley


Explanation

Understanding the foundational digital systems needed (like building information modeling) is essential for successful AI implementation in infrastructure projects.


How can we measure the real-world economic impact of AI agents on income inequality and wealth distribution?

Speaker

Srinivas Tallapragada


Explanation

This addresses the need for concrete metrics to evaluate whether AI is truly delivering on its promise of inclusive economic benefits, particularly for lower-income populations.


What specific upskilling programs are needed for government workers at different levels to effectively use AI agents?

Speaker

Saibal Chakraborty


Explanation

Since government workers using AI tools are not AI engineers, targeted training programs are essential to help them understand what can be trusted and what requires additional verification.


How can small language models be effectively scaled to serve farmers across diverse vernacular languages and agricultural contexts?

Speaker

Saibal Chakraborty


Explanation

This represents a key test case for AI inclusivity, requiring research into language localization, agricultural knowledge representation, and scalable deployment in rural areas.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.