Agents of Change AI for Government Services & Climate Resilience

Session at a glance
Summary, keypoints, and speakers overview

Summary

The session focused on the emerging role of AI agents in public governance, introduced by Minister Sridhar Babu and explored by a panel of experts. The Minister framed the current moment as an inflection point where the shift from generative AI that merely answers to agentic AI that can act is redefining policymaking [22-24]. He argued that intelligence should be treated as public infrastructure and described Telangana’s vision of AI as a co-governor that can forecast floods on the Musi river and pre-allocate resources before crises hit [45-48]. Pilot projects cited include AI farm advisors trained together with farmers in local dialects, a Telugu-language land-record AI that compresses response times, and satellite-driven heat analysis that will guide Hyderabad’s urban cooling strategy by 2035 [53-62]. The state has also launched a sovereign AI nerve centre (ICOM) and an open data exchange platform with over 1,000 datasets, which he says already power anticipatory health care and climate-resilient services [72-78][80-86]. Panelists defined an AI agent as a role-aware system with memory that can act across digital channels, insisting that guardrails and a trust layer are essential to curb hallucinations and bias [133-141][142-144][145-147]. They agreed that the biggest change is the move from narrow, task-specific bots to end-to-end, systems-level agents capable of autonomously executing business or government processes [110-113][119-122]. However, high-stakes applications such as drafting multi-million-dollar RFPs demand strong safeguards and will likely retain a human in the loop for final validation [148-154]. Concrete use cases highlighted were AI-driven flood-plain analysis for water infrastructure and police-assistant agents deployed in the UK and Tasmania, demonstrating immediate value while larger ambitions mature [170-174][185-188].
Speakers stressed the importance of data sovereignty (strategic control now, technical supply-chain control later), advocating a two-track approach so governments can benefit today while building longer-term capabilities [246-250][251-254]. Upskilling public officials and providing transparent “nutrition-label” disclosures were identified as critical guardrails, with the acknowledgement that probabilistic systems can never be perfectly deterministic [253][301-304][218-222]. Success metrics proposed included vernacular AI tools that empower farmers, faster delivery of physical infrastructure, and measurable income growth for the bottom 50% of earners [334-335][345-347][355-356]. The discussion concluded that AI agents can become a force multiplier for governance if standards, evaluation frameworks, and agile regulation evolve in step with rapid technological change [260-268][317-324].


Keypoints


Major discussion points


Shift from traditional AI to agentic AI – The Minister highlighted moving “from generative AI that simply answers… to agentic AI that acts now” [22-24]. Panelists echoed this transition, describing it as a move to “end-to-end AI-led execution of business processes or government processes” [110-113] and noting that “the biggest change… is the emergence of agentic AI” [116-117]. Mike added that the evolution is “from task specific to systems level” [120-122], while Srini observed a shift “from co-pilot human in the loop to agents which can act and really provide value” [124-125].


Concrete Telangana initiatives using AI agents – The Minister gave multiple examples: AI-driven flood prediction for the Musi river [45-48], a Telugu-first AI that “records land records, interprets satellite indicators and compresses the time between the climate event and an incident settlement” [57-59], satellite-driven heat analysis shaping urban cooling strategies [60-62], solar-powered edge compute nodes keeping services alive during grid failures [63-64], and the creation of a “sovereign AI nerve centre” and a state-wide data exchange platform that powers health-risk anticipation and climate-resilient planning [73-84][85-88].


Guardrails, trust, and human oversight – Srini stressed that an agent must have a “trust layer” with guardrails to prevent hallucinations, bias, and toxicity [144-148]. Lee warned of “risks of over-reliance” and emphasized careful use-case selection, sandboxes, and clear liability [190-203]. Mike highlighted the need for transparency (e.g., “nutrition-label” style cards) and human-in-the-loop control to build trust [216-231][301-306]. Saibal pointed out that up-skilling public-sector staff is essential because “the person… is not an AI engineer” [253-254].


Strategic and technical sovereignty over data and AI – The Minister described a “sovereign AI nerve centre” and an open data pipeline that keeps “all the data… on this platform” [73-77]. Srini differentiated “strategic sovereignty” (control over data and policies) from “technical sovereignty” (control over the full supply chain) and urged governments to pursue both tracks [246-251].


Future success metrics and vision – In closing, panelists offered concrete measures of progress: a farmer being able to get vernacular advice from a small language model [334-335]; an active AI safety evaluation ecosystem shared globally [339-343]; infrastructure built faster and more safely with public confidence [345-351]; and a measurable uplift in income for the bottom 50% of the population [355-357].


Overall purpose / goal


The discussion aimed to showcase how AI agents can become “force multipliers” for public governance, as illustrated by Telangana’s pioneering projects, while jointly exploring the policy, technical, and ethical frameworks needed to deploy them responsibly. Participants sought to define practical use cases, outline necessary guardrails, and envision measurable outcomes for a “better tomorrow” powered by trustworthy, sovereign AI.


Overall tone


The conversation began with an optimistic and visionary tone, celebrating Telangana’s breakthroughs and the promise of agentic AI. As the panel moved into technical details, the tone became analytical and cautionary, focusing on risks, guardrails, and the need for human oversight. In the final segment, the tone shifted to forward-looking and hopeful, emphasizing concrete success metrics and collaborative pathways for governments and industry. Throughout, the dialogue remained constructive and collaborative.


Speakers

Victoria Espinel – Panel moderator and discussion facilitator; representative of Salesforce (thanked the Salesforce team)[S12]


Minister Sridhar Babu – Minister (Telangana), policymaker and government official discussing AI governance[S5]


Srinivas Tallapragada – Engineering leader for a major AI platform (referred to as “Srini”), focuses on AI agents and trust layers[S8]


Saibal Chakraborty – Panelist, AI policy and public-sector expert[S9]


Lee Tiedrich – Professor, AI safety researcher; contributed to International AI Safety Report[S2]


Mike Haley – Senior Director of AI at Autodesk; discusses AI applications in infrastructure and guardrails[S1]


Additional speakers:


– None


Full session report
Comprehensive analysis and detailed insights

The session opened with Victoria Espinel welcoming Minister Sridhar Babu, describing him as a “very special guest” and inviting him to the podium [1-6]. The Minister began by greeting the audience, highlighting Delhi as the capital of India and noting the presence of distinguished panelists and industry leaders [7-12]. He framed the discussion around “AI agents for a Better Tomorrow” and positioned the present moment as a fundamental inflection point in governance [16-17].


A central theme introduced by the Minister was the transition from generative AI, which merely answers questions, to “agentic AI that acts now” [22-24]. He argued that the traditional search bar is dying, to be replaced by more profound, action-oriented systems [33-34]. This shift, he suggested, marks the third era, defined by the intelligence of the system, which should be treated as public infrastructure rather than a product [36-41]. He illustrated this with three “lives” of AI in the country: research, policy, and finally, real-world impact that addresses dust, drought, monsoons and markets [43-44].


He reiterated the conference theme “AI for everyone, AI for human welfare” [85-86] and, after outlining the vision, thanked the Salesforce team, the event organizers, and the audience for the opportunity to present Telangana’s work [90-95]. He also framed the future of governance as being forged in the “living laboratories of the Global South”, citing Hyderabad as a prime example [95-98].


The Minister then detailed several Telangana pilots that embody this vision. An AI co-governor is being used to predict floods on the Musi river and allocate resources before a crisis materialises [45-48]. In agriculture, AI advisors are being trained together with farmers, incorporating local dialects, soil wisdom and lived patterns into the model [52-54]. A Telugu-first AI system now records land records, interprets satellite indicators and dramatically shortens the response time between climate events and incident settlement [57-59]. Satellite-driven heat analysis is already shaping zoning, green-belt creation and urban cooling strategies for Hyderabad, with a target implementation by 2035 [60-62]. Edge-compute nodes powered by solar energy keep government services operational during grid failures, a first for any Indian state [63-64].


Telangana has launched what the Minister called the country’s first sovereign AI nerve centre, ICOM, intended as an AI innovation hub that supports R&D, talent development and deep integration of intelligence into governance [72-73]. Complementing this is a state-wide data-exchange platform that hosts over 1,084 datasets, converting administrative exhaust into ecological signals and enabling anticipatory health care and climate-resilient services [73-84]. The real breakthrough, he stressed, lies not in isolated projects but in the architecture that binds them together, such as the upcoming AI City and the net-zero Bharat Future City, which are envisioned as self-learning, sustainable territories that generate their own compute resources and serve as policy-advisory platforms [68-72].


Panelists then converged on a definition of an AI agent. Saibal Chakraborty described the shift as moving from solving discrete problems to end-to-end AI-led execution of business or government processes [110-113]. Lee Tiedrich added that agentic AI can act on behalf of people, extending beyond mere answer generation [115-117]. Mike Haley highlighted the evolution from task-specific bots to systems-level agents capable of chained reasoning and multi-agent orchestration [119-122]. Srinivas Tallapragada noted the transition from a “co-pilot human in the loop” to agents that can independently provide business value [124-125]. He enumerated the essential components of an agent: a defined role, knowledge, short- and long-term memory, actuation capability across digital channels, and a “trust layer” of guardrails to prevent hallucinations, bias and toxicity [133-147].
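The anatomy Srinivas describes (role, knowledge, memory, actuation, and a trust layer of guardrails that vets every action) can be sketched in a few lines of code. The class, method names, and guardrail below are purely illustrative assumptions for this summary, not any vendor's actual agent API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Hypothetical sketch of the components named on the panel.
    role: str                                       # what the agent is supposed to do
    knowledge: dict = field(default_factory=dict)   # facts it can draw on
    short_term_memory: list = field(default_factory=list)
    long_term_memory: list = field(default_factory=list)
    guardrails: list = field(default_factory=list)  # predicates every action must pass

    def permitted(self, action: str) -> bool:
        """Trust layer: all guardrails must approve the action."""
        return all(rule(action) for rule in self.guardrails)

    def act(self, action: str) -> str:
        """Actuate only if the trust layer allows it; log to memory either way."""
        outcome = "executed" if self.permitted(action) else "blocked"
        self.short_term_memory.append((action, outcome))  # auditability
        return outcome

# Example: a land-records assistant that must never delete records.
agent = Agent(
    role="land-records assistant",
    guardrails=[lambda a: "delete" not in a.lower()],
)
print(agent.act("look up survey number 1084"))  # prints "executed"
print(agent.act("Delete record 42"))            # prints "blocked"
```

The point of the sketch is structural: the guardrails sit between intent and actuation, and every decision is logged, which is the governance-and-auditability property the panel insists on for public-sector deployment.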


The panel collectively stressed that robust guardrails, auditability, and human-in-the-loop oversight are indispensable for high-stakes government applications [148-154][190-203][216-231]. They emphasized the need for a command-centre architecture that enables testing, auditing and independent verification before deployment [214-215], sandboxes and clear liability rules for use-case selection [190-203], and transparent control panels that allow engineers to intervene, reassess and override outputs, thereby building trust [216-231].


Capacity development was highlighted as essential. The Minister’s pilots with farmers exemplify how end-users can be trained to contribute data and benefit from AI [48-55]. Saibal stressed that public-sector officials, who are not AI engineers, must be up-skilled to understand trust limits and know when human checks are necessary [253-254]. This aligns with broader policy observations that AI governance requires multidisciplinary collaboration among policymakers, lawyers, engineers and sector specialists [S1,S54].


Data sovereignty emerged as another focal point. The Minister presented Telangana’s sovereign AI nerve centre and open data pipeline as a model for treating AI as core public infrastructure [72-77]. Srinivas Tallapragada distinguished “strategic sovereignty” (control over data and policies) from “technical sovereignty” (control over the full hardware supply chain), urging governments to pursue both tracks: immediate strategic control now, with a longer-term plan for technical independence [246-254].


Panelists agreed that standards and regulation must be agile to keep pace with rapid AI advances. Lee advocated for global, multidisciplinary standards and evaluation ecosystems that can be localised to respect cultural and legal differences [260-280,S1]. Srinivas Tallapragada proposed an “agile regulation” model, where policy frameworks incorporate feedback loops and can be updated iteratively, mirroring product-development cycles [317-324]. Mike described an industry-led approach: embedding “transparency cards” (nutrition-label-style disclosures of model provenance, data sources, accuracy and bias) into every AI feature, thereby giving governments clear information and pre-empting regulatory lag [301-306].


Concrete use-cases were explored. Mike detailed AI agents that analyse floodplains and optimise drainage in water-system design, illustrating how agents can assist early-stage infrastructure decisions despite imperfect inputs [170-174]. Srinivas Tallapragada cited police-assistant agents deployed in the UK (Bobby) and Tasmania (Terry), which handle non-emergency citizen queries and support field officers, demonstrating immediate value in public safety [185-188]. The panel also discussed AI-driven disaster response, with the Minister envisioning anticipatory actions that “counter dust, drought, monsoons and markets” [44-45], while others cautioned that probabilistic predictions require human oversight to avoid over-confidence [217-231][190-203].


Disagreements centred on the degree of autonomy appropriate for critical government functions. The Minister described AI as a “co-governor” that can act proactively (e.g., flood prediction) [45-48], whereas Mike and Lee highlighted the inherent uncertainty of AI outputs and the necessity of human-in-the-loop safeguards [217-231][190-203]. A second point of contention involved the primary mechanism for ensuring safe deployment: Saibal favoured procedural guardrails and human oversight, Lee pushed for internationally harmonised standards, Srinivas Tallapragada advocated agile, feedback-driven policy, and Mike suggested industry self-regulation via transparency cards [148-154][260-280][317-324][301-306].


When asked to envision success metrics three years hence, panelists offered concrete indicators. Saibal suggested that the true win would be a farmer receiving vernacular AI advice at scale across India [334-335]. Lee envisaged an active AI safety evaluation ecosystem, with AI Centres of Excellence sharing techniques globally [339-343]. Mike highlighted faster, safer infrastructure development that enjoys public confidence and engineer endorsement [345-351]. Srinivas Tallapragada proposed a measurable uplift in per-capita income for the bottom 50% of earners [355-357].


In closing, the Minister reaffirmed that AI agents can act as “force multipliers” for governance, provided that they are embedded within a trustworthy operating system, supported by sovereign data platforms and guided by robust guardrails [91-93]. Overall, the session highlighted a shared conviction that agentic AI, when built on sovereign data, transparent guardrails, and agile governance, can become a force multiplier for inclusive, resilient public services.


Session transcript
Complete transcript of the session
Victoria Espinel

We are going to start with a very special guest. Minister Babu is going to join us for a keynote. Very excited to hear what you have to say, coming from Hyderabad, one of the centers of technology in India and in the world. So, Minister, thank you so much for joining us. And if I could ask you to come to the podium. Thank you so much, Minister.

Minister Sridhar Babu

Very good afternoon to all. In fact, we welcome you to our city of Delhi, a beautiful city, the capital of India. And many people are from India, too. And we welcome the distinguished, eminent panelists who are sitting here to discuss the course for a Better Tomorrow. And I welcome the leaders of the industry and the delegates over here. And especially coming to the subject, AI agents for a Better Tomorrow. You know, I wish to see, you know, where we stand today and where we would end up tomorrow. That is the point of discussion over here. We stand today at a fundamental inflection point in the history of governance. As a policymaker, I would like to mention a few points.

Because all the technocrats and all the eminent, you know, scientists, maybe from physics or maths, may be sitting on the other side, to develop AI to the next level. You know, for decades, the digital revolution in government was defined by the transition from paper to portals and from physical queues to digital clicks. But today, we are witnessing the birth of a new paradigm. We are moving beyond generative AI that simply answers. We are moving from that to agentic AI that acts now. This is what I’ve been discussing with Mr. Srinivas just now. And for 30 years, our relationship with technology was a series of commands. We used to give commands and used to get the answers.

We typed, we clicked, we prompted. We were the masters of the search bar. We used to, you know, we were the masters. Nobody can say otherwise. But I stand here. I stand here today. I can see, and everybody can see, the search bar is dying. In its place comes something more profound. Just now Mrs. Sweeney was telling us about agency. It’s just evolving. The first era of our nation building was defined by land. The second by industry. And the third is being defined, more elusively, by the intelligence of the system. And the nations that lead this century are those that learn to treat intelligence not as a product but as a form of public infrastructure.

The idea is not philosophical for our state of Telangana. It is the story of our everyday governance, because we are an IT-driven state, as we are known for. I often say that artificial intelligence has three lives in the country. The first life is in the research labs. The second we take into the policy papers. But the third, ultimately, is how both of these combine together to affect the life that truly matters for each and everybody. You know, how do we see it? It is when AI meets the real challenges of our lives. When artificial intelligence meets the dust we face, when AI meets the drought, when it meets the monsoons, when it meets the markets of the living society. And this is where its legitimacy is earned: when it really counters this dust, drought, monsoons and markets. In Telangana we see agents not as a tool; here we would like to take them as teammates.

You know, the way pilots rely on co-pilots, tomorrow our government here in Telangana also sees that we rely on AI as co-governors: systems that can predict a flood before the first cloud gathers over the Musi. The Musi is our river in the midst of our city. You know, allocate resources before the crisis and deliver services before citizens ever need to ask. For example, if you take agriculture: a small farmer. I hail from a very remote area, and that too a rural place. For a farmer in my place or in some other rural area, the climate is not an environmental concept for them; it is right now a daily negotiation with uncertainty.

So when we built our AI advisors we did something unconventional. Right now we are trying to do this at the pilot stage. We asked farmers to train the system with us. You know, the dialects, the soil wisdom, the lived patterns become the pattern of the model. This is where the governance comes into the picture. To use the best of the technologies, whatever you invent or produce sitting in R&D, use the best of your grey matter to come up with some products; until and unless we use and induce it into our governance there will be no end result. That is what we believe in. That is why our Telugu-first AI can record land records, interpret satellite indicators and compress the time between the climate event and an incident settlement.

So this saved lots of time, you know, for our, you know, government agencies as well as for the end user, the farmer. Our satellite-driven heat analysis no longer stops at mapping temperatures. It now shapes zoning, green belts and urban cooling strategies for Hyderabad, which we are planning to take up to the core by 2035. And across 33 districts in our state, our solar-powered edge compute nodes ensure that government services and the climate response remain operational when the grid fails. And this is also one of the novel things: Telangana is the first state where we have implemented it. Yet I don’t claim that these are complete examples for climate. This is just the start of the story.

This is just a beginning. This is the first preface, we can say, because the real breakthrough is not from each project. It is from the architecture that binds them together. Our future projects, like the coming state-of-the-art infrastructure in the upcoming AI city, an absolutely dedicated AI city, and the Bharat Future City, which shall be the net-zero city, are designed not as smart districts, either for technology or for other aspects, but as self-learning cities: territories that define sustainability, territories which can provide themselves with compute and make themselves policy advisors. Our country’s first sovereign AI nerve centre, ICOM: you know, this is the first-ever initiative by any state in India, that we have come up with the first sovereign AI nerve centre, which is supposed to be the AI innovation hub, named ICOM. The aim and objective is, you know, that this intelligence shall go deep beyond just incubation, but also render into R&D, and shall be the prime focus of creating AI-ready talent for tomorrow’s world. And I would like to mention here that Hyderabad and Telangana is the first state to come up with a platform, the Telangana data exchange platform. This sovereign, open data pipeline ensures that the intelligence is grounded in integrity.

So the platform is in the open. And this is the first state; we have put all the data on this platform. You know, if we go through it, by this open data pipeline, you know, 1,084 datasets have moved from administrative exhaust to ecological signal. We have created something rare in the global south: a state that generates its own intelligence at scale. And we have seen the results too. And the results have shown: healthcare doesn’t wait for symptoms. It now anticipates risk. Because of the data exchange we have done with our co-partners, even in healthcare, with the doctors or with the public health institutions, they are not just waiting to deliver the medication, but predicting the risk and trying to put it into action.

And we are not waiting for the heat waves to come. We are trying to analyze through the data how we should place ourselves, and we are preparing corridors for the shade. And for farmers also, we believe, using this AI technology, we don’t want farmers to wait for the loss. You know, they have to receive assurance before despair. And we are also planning that infrastructure doesn’t wait to break. You know, it has to whisper when it will fail. You know, when all these cutting-edge technologies, especially AI, are deployed with purpose, AI agents offer government something rare in public life: the ability to act before harm, to prepare before shock, to protect before loss. And how resilient our infrastructure emerges, how safe the climate-resilient cities take shape and how our public services become anticipatory, humane and trusted.

And this is the future we are imagining, and we are trying to put all our actions into stream, and it is this operating system we dreamt of and we started running. And I believe the next chapter of statecraft will not be written in the boardrooms of traditional power centers but in the living laboratories of the global south. In cities like Hyderabad, the world can already see a preview of what an intelligent century of governance looks like. Let us leave Bharat Mandapam today, here, while this great convention is taking place, with a shared conviction that the tomorrow we are building is not just smarter, it is braver. And, you know, as the great caption goes, AI for everyone, AI for human welfare should be the theme.

And also, we should, I as a policymaker, you as the technology experts sitting over there, should aim and anticipate for it. I thank the organizers for giving me, you know, the time to air my pitch on behalf of our state of Telangana. I would like to thank the Salesforce team, especially the team management who invited me over here for gracing this, and for having, you know, all the best brains sitting over here, the grey matter who would be doing much more for the welfare of human beings. Thank you very much.

Victoria Espinel

Minister, thank you so much for joining us. We very much appreciate it. It was very exciting to hear what’s happening in Hyderabad and in Telangana. Let’s kick our panel off. Alright, so I am going to start with an icebreaker. Everyone gets 30 seconds to respond. This panel is about AI agents, so, and I’m going to start there and then go towards me, what would you say is the single biggest difference between the AI agents we were seeing when we sat here last year and the AI agents that we are seeing today? Saibal, can you kick us off?

Saibal Chakraborty

So I think in my mind the conversation has moved decisively towards agentic AI. We are no longer talking about, as the Honorable Minister also said, solving discrete problems or discrete searches. We are now looking at end-to-end AI-led execution of business processes or government processes. I think that’s the single biggest change in thinking that has come up.

Victoria Espinel

Professor Lee Tiedrich?

Lee Tiedrich

To put this in context, I was involved in the International AI Safety Report, and we just had our panel on that a little while ago. And Professor Bengio was saying the biggest change from ’25 to ’26 is the emergence of agentic AI. And my perspective is that its ability not only to do the end-to-end, but to also act on behalf of people, is really the big change.

Victoria Espinel

Mike?

Mike Haley

So I’m probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve specific problems. What we see now are agents that are able to abstract the problem, do chain-of-thought reasoning, take that and turn it into sequenced action, and turn to multi-agent, sort of systems-level thinking. So the move from task-specific to systems-level is the big shift that I’m seeing.

Victoria Espinel

And Srini?

Srinivas Tallapragada

Yeah, so I think for me the big shift has been from co-pilot, human in the loop, to agents which can act and really provide value, business value. And that’s been the big shift.

Victoria Espinel

So let’s talk about that value. Let’s talk about AI agents as a force multiplier. I’m going to start here this time. Srini, you lead engineering for one of the biggest platforms in the world. There’s a lot of discussion about AI agents. Can you demystify this? What does that mean?

Srinivas Tallapragada

Yeah. So I think, what does that mean? An agent, just like a human, first of all, has to act. It has agency and it acts. That’s the first big difference. And like any agent, it has to have a couple of things. It has to know a role. Just like a human, it needs to know what it’s supposed to do, what are the jobs to be done. It needs knowledge. Just like the knowledge I have in my mind, an agent has to have knowledge, some memory, both short-term and long-term memory. And then it should also be able to act. You know, it should be able to, in a digital world, act on an API or something.

And then it should be able to act wherever the surface is, wherever the user is interacting with it, in a WhatsApp channel or web channel or a digital channel or an SMS text. More importantly, most important in all of this, is we should have guardrails on what it’s not supposed to do. That’s the most important. And then all of it has to be covered, to make it useful, with what we call a trust layer, because these things can hallucinate, they can have bias, they can have toxicity; we have to avoid all of that, and they are unpredictable ultimately, so it should have governance and auditability, so you can do all of this. Doing all of this is what an agent does. And this is also why, even though there is a lot of hype, in reality it hasn’t diffused enough. This is the business value gap which we are trying to bridge as the vendors.

Victoria Espinel

Thank you. Saibal, I’m going to go to you next. So let’s talk about governance. We sit here in Delhi, the capital of one of the greatest nations of the world. The public sector, are they ready for this? How do we think about that?

Saibal Chakraborty

So I think, let me not answer that question directly. I think the public sector needs to be ready, all the way from managing public finances and public procurement to managing their workflows and processes better; there is no way the public sector can avoid this. However, as Srini pointed out, the stakes here are very, very high. So imagine an agent crafting an RFP, a multi-million or a billion-dollar RFP, on behalf of the government. How do we, and you know, in public procurement, we often sacrifice speed for procedural tightness. So what guardrails do we put around an agent? Can it really be end-to-end? Can it really be fully autonomous?

Or do I still need that last human layer to make sure that the T’s are crossed, the I’s are dotted? Because the stakes are really high, and a mistake can really, you know, lead to a lot of negative impact. So I think the public sector has to be ready, but I think some of these guardrails have to be thought through. And in the context of the public sector, are agents fully autonomous, or do they still operate with a little bit of that human layer? I think that has to be thought through.

Victoria Espinel

That’s great, thank you. I love that you said RFPs because that’s a concrete example. So let’s talk a little bit about use cases. And Mike, I’m going to go to you. Let’s talk about resilient infrastructure. One of the examples I hear a lot for AI agents is that they can help you make reservations, and I love to eat; I think making restaurant reservations is actually pretty valuable to me. But could an AI agent do something like design a bridge? Could it design an energy grid? Like, where do we stand between reality and science fiction?

Mike Haley

Yes, I think we’re tracking pretty quickly to agents being able to do just those kinds of things. In the past, using computational methods and AI, which have been around for a reasonable time, for these things has been very difficult. Because if you’re using some form of computational method or AI to design a bridge, you have to specify that bridge perfectly. You have to give it perfect inputs. Now, it turns out that when a designer is designing something, they don’t have perfect inputs. That’s the process of design: actually figuring out what your inputs are, right? So this has always been a little bit of a barrier for people to use these advanced methods.

With AI, and specifically AI agents, you’ve now got a much easier way of interacting. It’s more forgiving toward fuzzy requirements and earlier stages of thinking. It’s able to give you things that inspire you. One of the things I talk a lot about publicly is the notion of agents and creatives working in a loop together: it breaks the cycle where the engineer has to come up with every idea from scratch. Rather, describe what you’re doing and let the agents explore. So I’ll give you one example specifically in infrastructure, because you wanted to get concrete. Something we work with is water systems. We’ve built AI agents that can analyze floodplains.

They can analyze how you might want to think about water drainage and those kinds of things. So every time you’re making a decision early on in your design, you can let this thing run through, and it will optimize your design to ensure that drainage is going to be successful. Now, drainage seems like a small side thing, but it’s a pretty massive part of infrastructure. Having an agent handle that for you is a pretty big deal.

Victoria Espinel

Mike, I have very close family ties to Louisiana, so drainage and flood zones, that is not a small thing. That is a very, very big thing. And actually, that’s a perfect segue to the question I wanted to ask Srini. So one of the most complex things that a government might have to deal with is disaster response. Is that a place where AI agents could be helpful?

Srinivas Tallapragada

I really like the theme, welfare for all. While we can think of very big things AI might do, AI can add value right now, and disaster response is one good example. Another example I wanted to give: the key is to give back time to people. That’s very valuable; giving back time is a very noble goal, in my opinion. So we have this very interesting use case: a city in New Thames in the UK created an agent called Bobby (Bobby is a UK term for a policeman), and citizens ask it a lot of non-emergency questions, which Bobby answers.

More than 90% of them get a lot of value. What was interesting for me was another city, in Tasmania, which is using Agentforce to roll out agents to more than a thousand police officers. A lot of the time when they are in the field, policemen, new or more experienced, have a lot of questions, and they ask this agent, which they call Terry. A lot of policemen say Terry is their best partner. So while we can think about futuristic uses, here and now there is a lot we can provide with today’s technology and guardrails, in the public sector and obviously the private sector, if you have the right platform, with trust and governance as a foundational value and all the right guardrails in place. We are seeing thousands of examples across the public and private sector in crawl-walk-run mode: you start with something basic and still add value. The most esoteric cases involve multi-agent orchestrations, but you can start with the basics today and still get a lot of value. That’s what we are seeing.

Victoria Espinel

That’s great. So, Professor Tiedrich, we’ve talked a little bit about how agents can help governments serve their publics. Are there risks there? Are there risks of over-reliance?

Lee Tiedrich

Yeah, I mean, there are definitely risks, and I share the view of my co-panelists that there are a lot of benefits to using AI in government and improving government services worldwide. But like everything else, we have to do it cautiously and smartly, and some of it comes back to the human factor: pick your use cases wisely. One of the themes in the safety report is that AI is emerging very jaggedly. We have some use cases, like computer programming, that are really good. There are others that may not be quite ready for prime time. So when we think about over-reliance, it’s about seeing where AI is excelling, focusing on those use cases, and maybe doing sandboxes around some of the others to give them a little more time to mature.

The over-reliance point, picking up on some of the great points made, also comes back to guardrails. One of the things in the safety report is good news: we’ve made a lot of progress on guardrails and risk management, but as the technology moves quickly, there is a lot more work to be done. So we shouldn’t rely so much on the technology that we overlook guardrails and where humans should be in the loop. And the third thing I’ll mention is the interoperability of different agents. As agents start to call upon third-party agents, it’s thinking through what guardrails you need, how you choose them, how you allocate liability, and how you test the agents you’re going to bring into your system.

Victoria Espinel

So guardrails have come up. Srini mentioned it; you just mentioned it. Let’s talk about guardrails a little bit. Srini, we hear about chatbots, we hear about hallucinations. Those can be annoying. When you’re talking about a government deploying an AI system, AI agents, the consequences can be extremely significant: a hallucination by an agent can be quite dangerous. So let’s talk about guardrails. How do you engineer trust into a system so that a minister or a secretary can feel confident that it’s a tool they can use to serve their people?

Srinivas Tallapragada

Agents can drift, they can hallucinate. So you need a command center where you can see all of it. This is the difference between a pilot or a demo (you can find thousands of demos on YouTube) versus real life, where these things really matter. So we had to build all of these things so that customers or governments can build confidence: they can audit, they can test, and not just themselves; an independent party can also test. All of this infrastructure is what is required to make this a reality. But once you do that, there is huge value you can immediately provide to customers or citizens.

Mike Haley

Can I just add to that quickly? Because I think you hit a really interesting point at the end there. When people talk about guardrails, they think of them as this perfect thing: at some point the guardrails are going to get strong enough that every result is perfect, completely predictable, and we’re good. And I think we need to be honest about that. We’re talking about systems that are inherently probabilistic. You’re never going to make a probabilistic system 100% deterministic; it’s an oxymoron. Right. So what we’ve discovered is that you still do all the guardrails work we’re all talking about, but, as you were saying at the end there, you also build systems that can assess the accuracy of what’s produced and give you feedback on how well the solution is going to perform. And then, and this is very important, what we’ve discovered is the value of giving control to the human being, in our case to an engineer, who is able to say, oh, I get it.

The result is a little off. I’m going to give it some more feedback. I’m going to reassess the results. I’m going to run it again. Or I might even go in myself and tweak that information. And what we’ve discovered, when I’m talking to an engineer and explaining how this stuff works, is that if I don’t give them that level of control, they don’t trust the system. The minute they know they can actually control it, they do. Trust doesn’t depend on a perfect answer. Trust depends on transparency and understanding, and then on the ability to come in and control something.

Victoria Espinel

But I think that’s also because the engineers understand this: it’s a tool for them to use, to help them. It’s not something that is going to take control. Is there anything specifically with respect to infrastructure that you think governments should be mindful of?

Mike Haley

Yeah. Well, look, infrastructure is not known as the easiest and quickest thing to build in countries, right? And one of the really boring but absolutely necessary things with infrastructure is to make sure the digital ecosystem around that infrastructure is set. I see a lot of places in the world getting into building infrastructure, trying to do it quickly, without getting all that digital infrastructure in place. So building information modelling: ensuring that every part of your infrastructure is correctly modelled and represented at the right level. AI is not going to just magically come in and solve a bunch of problems unless you’ve got a lot of that digital groundwork in place already.

So it’s a little bit of the boring work, but getting that in place early is one of the biggest things. I’ve had a number of conversations here this week about the 2047 initiative in India and the amount of infrastructure that needs to be built in this country, and the importance of using something like building information modelling and getting standard data in place now. If you get that in place now, all this AI goodness is way easier to deploy against it.

Victoria Espinel

Yeah, please.

Srinivas Tallapragada

Yeah, so I heard a lot of discussion around sovereignty, and I think we should think of sovereignty at two levels: strategic sovereignty and technical sovereignty. By strategic sovereignty, I mean you get control over your data, your governance policies, and your operational policies. That you can implement right now and get value. Then there is the technical one, where people want to control their entire supply chain, from the chips on up. I would like governments, public officials, and policy officials to think of these as two tracks. One takes longer and needs a lot of capital investment. Don’t let the second track stop you from getting the benefit of the first. The first track is easy: you can ensure the data doesn’t leave your country, your policy guardrails have control, you keep a human in the loop, and you still get a lot of benefits while you continue on the second track. That would be my request to all the governments.

Saibal Chakraborty

Can I make a quick build on what Mike said? Because I do a lot of my work in the public sector with governments. I think one of the biggest guardrails, beyond policies, is actually the skilling, the upskilling. Like Mike said, it’s an inherently probabilistic system, so you cannot expect it to give correct results all the time; there is no such thing as a guaranteed correct result. So the person who is actually using the tool at the district level, at the state level, to make real government decisions is not an AI engineer. That person needs to be upskilled and needs to be told what can be trusted and what requires that additional layer of checking.

So if agentic AI is to take off in the public sector at scale, then that upskilling at various levels of government, on what can be trusted and what cannot, is also a very, very big component.

Victoria Espinel

Yes, I totally agree. Professor, I wanted to ask you: it feels so trite to say technology is moving really quickly, but in the last few years AI has been moving very, very quickly. We’ve talked a lot about guardrails. How should governments think about this? How are governments going to keep up in terms of setting government expectations, and potentially regulation, for a technology that is moving so quickly?

Lee Tiedrich

It’s a hard one. AI has evolved into a global, multi-disciplinary field, and I think we need to bring the global community together. We need policymakers and lawyers talking with engineers and sector specialists to really inform the policy in real time. I’m a big fan of this approach; I spent a year working at NIST, the U.S. National Institute of Standards and Technology. We need to figure out how to do some of the guardrails starting with the science. Then the science can inform how to develop the standards and how to develop the evals. And then it becomes a question.

I mean, different countries have different views on whether to regulate or not. The U.S. has a very deregulatory approach; Europe is the opposite. But if we can agree on what the common standards are for evaluation and testing, then governments are free to decide whether to mandate this or not. And there is an important nuance to add to the mix, which has been a theme of the conference: we want some standardization on these evaluation mechanisms.

But we have to recognize that we speak different languages and have different cultural norms. So when we want standardization, we’ve got to be able to localize what the evaluation looks like, because what might be appropriate in one country isn’t going to be appropriate in another. So it’s hard, but start with the science (the scientific report, which I would point people to), build on that, work through the AISI network, work through standards organizations and all these other initiatives to develop the evaluation and build that evaluation ecosystem. Then regulations can overlay on top of that as policymakers think appropriate for their jurisdictions.

Victoria Espinel

But if I could ask a follow-up question to you or any of the panelists: I think one of the challenges there is for companies. I speak for the enterprise software companies that I represent, and it’s really helpful for companies to know what those government expectations are. Industry is looking for clarity and predictability.

Mike Haley

Should I take a shot at it? As a software provider, at Autodesk we definitely deal with that, Victoria. We’ve had a couple of approaches. One, we’re obviously going to stay on top of this all the time, working with governments and making this part of the conversation. I spend a good part of my year traveling around the world, talking to governments, trying to help them understand what needs to happen, but also helping us understand, like you said, what they want. But the main problem is just the sheer variance. Even within the United States, we have differences between state efforts, right? And then you get around the world, and it gets even more complicated.

What we’ve tried to do is run as far ahead of this as we can. If there is a way we can build in good controls right from the beginning, we build those controls to the maximum extent we can within reason. I’ll give you an example. In every AI feature we have in our software, we have something called a transparency card, which looks like a nutrition label on food. That label tells you what kind of model is behind the feature, what data was used to train it, what level of control you have, what accuracy it has, any bias that we know about in the model, that kind of thing.

And it’s a standard thing. We rolled that out about a year ago, really to try and stay ahead of things, so that if governments start asking for these things, well, we’ve got a transparency card. What has actually happened now is that there is a bunch of interest in that becoming part of a standard. I’m not saying that just to tout us, because I think other companies are doing great things in this space as well; you guys are doing a bunch of good work in this space too. I think this is an opportunity for us in industry to run ahead and help define some of these things, because it is moving so fast.

And, maybe I shouldn’t say this publicly, but the government doesn’t always have the best answers, right? So we can work with government to help them develop those answers and come up with good things, which then helps us manage some of the complexity that’s coming down the line.

Srinivas Tallapragada

Yeah, so one of the challenges here is that you can’t project too far ahead. It’s an exponential curve; it’s very hard to project. So sometimes you learn by doing. I think the biggest thing all governments can do is build a policy framework for how to update these standards. Today, setting a standard usually takes a long time, so everybody is afraid, and it’s even harder to change a standard once it exists. So then they try to solve everything up front, while things keep changing. I think the main thing policymakers could do is build in a feedback loop, a way to improve the policy framework, because then you don’t need to be afraid of not getting everything right.

You understand that, hey, you told me some basics, and as new data comes in, you can update it. In engineering and product, we call this the product feedback loop and agile development. If we have something equivalent for policy, then everybody is clear, because we all want the right thing; there is no disconnect on the foundational point. We want AI to help our entire community in a net positive way. And with a changing technology, if the regulatory framework is able to change, then we are not afraid of having to get everything right on day one.

And we can learn by doing it. So agile regulation.

Victoria Espinel

I have loved this panel. Unfortunately, we’re coming to a close, so I’m going to ask each of you one final question. Saibal, I’m going to start with you and then head this way. If we were so fortunate as to meet again in Delhi in three years, looking back, what would you say is the one thing that would best determine whether or not we have succeeded in addressing some of these challenges? I know it’s a big question, sorry, but thank you.

Saibal Chakraborty

Since we’re in Delhi, I’ll give the answer in the Indian context. One of the primary themes of this conference is inclusivity, so for me the true success of AI will be if a farmer could talk to a tool powered by a small language model in his or her own vernacular language and get practical advice on how to manage the crop and the cattle, and if that could be scaled up across the length and breadth of India. That, for me, is the real win for AI.

Victoria Espinel

That’s a big win. I mean, that’s a significant impact. Thank you. Great. Professor Tiedrich?

Lee Tiedrich

I’m coming back to the evaluation ecosystem. We’ve made a lot of progress over the last couple of years, but more work needs to be done. More countries, including in the Global South, are launching AISIs, AI safety or security institutes, which is not hard, binding regulation, but it is governments weighing in. Real progress three years from now would mean an active AISI network that is sharing information and making real progress on evaluation techniques. And one of the commitments that came out from some of the companies yesterday is also localizing that, so everybody can benefit, Global North and Global South. Thank you.

Victoria Espinel

Mike?

Mike Haley

Earlier on, I spoke about infrastructure, the physical infrastructure in countries. What I would hope to see in a couple of years’ time is infrastructure genuinely being developed faster than it has ever been developed, which is a really, really tough problem to make happen in the physical world. As a measure of AI truly delivering, that is an incredible measure. But on top of that, it needs to happen without compromising safety, and without it being a big black box that nobody understands, right? So what I would love to see is not only that infrastructure being developed faster, but the public engaged with it.

The engineers and people doing it feel comfortable with it. They feel secure. They feel fine signing off on it because they believe it is reliable. Thank you.

Victoria Espinel

Srini?

Srinivas Tallapragada

If AI is as revolutionary as we all assume, I would hope that in three years the bottom 50% income percentile has seen a measurable rise in per capita income. That, for me, is the real impact of this technology.

Victoria Espinel

That’s fantastic. I want to say thank you to all of our panelists, and a special thank you to Srini and to Salesforce for bringing us all together here today. Thank you to our audience for joining us. A big round of applause for our panelists. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (14)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high confidence)

“Victoria Espinel welcomed Minister Sridhar Babu as a “very special guest” and invited him to the podium”

The opening remarks in the knowledge base explicitly refer to the minister as a “very special guest” and thank him for joining the keynote [S1] and [S2].

Additional Context (medium confidence)

“AI should be treated as public infrastructure rather than a product”

The knowledge base contains statements that “Intelligence is not an asset, it’s infrastructure” and that “Information should be treated as a public good rather than a commercial commodity,” providing supporting context for treating AI as public infrastructure [S82] and [S80].

Additional Context (medium confidence)

“AI advisors are being trained together with farmers, incorporating local dialects, soil wisdom and lived patterns into the model”

A related point in the knowledge base notes that farmers are using AI weather forecasts, illustrating how AI is being applied in agriculture to support farmers, which adds nuance to the claim about farmer-centric AI advisors [S85].

External Sources (89)
S1
S2
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — Professor Lee Tiedrich? big win. I mean, that’s a significant impact. Thank you. Great. Professor Tietrich? Yeah, so
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-safety-at-the-global-level-insights-from-digital-ministers-of — I think Yashua and Alondra’s comments tee up the next question for Adam. These risks are evolving quite rapidly, and one…
S4
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And I think sort of working from the bottom up with the science, developing the evaluation technique, taking into accoun…
S5
Agents of Change AI for Government Services & Climate Resilience — – Minister Sridhar Babu- Srinivas Tallapragada
S6
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This panel discussion on heterogeneous computing and AI infrastructure in India brought together leading experts from in…
S7
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — And we can learn by doing it. So agile regulation. I really like the theme, welfare for all. And I think while we can t…
S8
Agents of Change AI for Government Services & Climate Resilience — – Mike Haley- Srinivas Tallapragada – Minister Sridhar Babu- Srinivas Tallapragada – Saibal Chakraborty- Srinivas Tall…
S9
Agents of Change AI for Government Services & Climate Resilience — – Saibal Chakraborty- Srinivas Tallapragada
S10
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-digital-public-infrastructure-dpi-india-ai-impact-summit — Government data, it’s early days, it’s very early days, but government data is being provided access to through platform…
S11
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — And Srini? Srini Srinivasan But I think that’s also because the engineers understand this. This is the tool. It’s a too…
S12
Agents of Change AI for Government Services & Climate Resilience — -Victoria Espinel- Panel moderator and discussion facilitator
S13
Pre 4: Dynamic Coalition on data and trust: Stakeholders Speak – Perspectives on Age Verification — – **Regina Filipová Fuchsová**: Industry Relations Manager at EURID, session moderator Regina Filipová Fuchsová: Excuse…
S14
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — I think it should be soon. I think there are ministers, Ashniv Ashton sir. We should add one more role to them. We shoul…
S15
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S16
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection t…
S17
Agentic AI in Focus Opportunities Risks and Governance — So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The secon…
S18
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S19
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And as several of our panelists emphasized, if we don’t address that gap deliberately, the shift towards AI agents is on…
S20
Safe and Responsible AI at Scale Practical Pathways — Artificial intelligence | Building confidence and security in the use of ICTs Guardrails, Human‑in‑the‑Loop, and Risk‑A…
S21
WS #283 AI Agents: Ensuring Responsible Deployment — Prendergast frames agentic AI as a critical technological shift where AI has evolved beyond reactive tools to become pro…
S22
Heat action plans in India struggle to match rising urban temperatures — On 11 June, the India Meteorological Department (IMD)issued a red alert for Delhias temperatures exceeded 45°C, with rea…
S23
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Policy needs to be at a principle level because if it becomes too detailed, it becomes hard to maintain, especially with…
S24
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya explained that trust ranks first among Salesforce’s five core values—trust, customer success, innovation, e…
S25
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S26
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S27
The Declaration for the Future of the Internet: Principles to Action — A balanced scorecard with certain parameters provides a measurable indicator of the progress made. A nuanced shift in p…
S28
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Moderate disagreement with significant implications. While speakers agreed on broad goals, their different assessments o…
S29
From summer disillusionment to autumn clarity: Ten lessons for AI — Overall, what’s notable in all these political developments is pragmatism. The lofty narratives of last year – like fear…
S30
New plan outlines how India will democratise AI infrastructure — Indiais moving to rebalance access to AI infrastructureas part of a new national push to close gaps in computing power a…
S31
Multistakeholder Partnerships for Thriving AI Ecosystems — This comment introduces a sophisticated understanding of AI infrastructure needs, moving beyond simple data collection t…
S32
AI as critical infrastructure for continuity in public services — Artificial intelligence | Data governance | Building confidence and security in the use of ICTs Data sovereignty requir…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Capacity Building and Implementation Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me …
S34
Open Forum #3 Cyberdefense and AI in Developing Economies — – Ram Mohan- Wolfgang Kleinwachter- Philipp Grabensee Capacity Building and Human Resources Development | Legal and re…
S35
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S36
Open Forum #17 AI Regulation Insights From Parliaments — Capacity Building and Education Capacity building and education are essential for all stakeholders Development | Capac…
S37
Agents of Change AI for Government Services & Climate Resilience — Artificial intelligence The minister says AI is moving beyond simple question answering toward agents that can act auto…
S38
WS #283 AI Agents: Ensuring Responsible Deployment — These key comments fundamentally transformed what could have been a technical discussion about AI governance into a nuan…
S39
Survival Tech Harnessing AI to Manage Global Climate Extremes — In the large scale model, it defiles. So the AI can downscale better way in the localized, suppose one kilometer resolut…
S40
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It…
S41
Agentic AI in Focus Opportunities Risks and Governance — -Enterprise Guardrails and Risk Management: Panelists emphasized the critical importance of implementing robust safety m…
S42
Keynote-António Guterres — We need guardrails that preserve human agency, human oversight and human accountability
S43
Agentic AI and the new industrial diplomacy — How this looks in practice:The European AI Act,which came into force in2024, classifies many industrial AI systems as ‘h…
S44
Digital Embassies for Sovereign AI — This addresses the need for adaptive governance frameworks that can keep pace with rapid technological change
S45
WS #162 Overregulation: Balance Policy and Innovation in Technology — This workshop focused on balancing AI regulation and innovation, exploring how to foster technological advancement while…
S46
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, it is crucial for AI regulation to keep pace with the rapid advancements in technology. The perceived ina…
S47
Opening address of the co-chairs of the AI Governance Dialogue — International technical standards and their role to make sure that policy and regulation is flexible and agile Standard…
S48
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The discussion suggests several key implications for agricultural development. First, AI tools must be designed with acc…
S49
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — AI in this regard offers significant potential. We’re seeing AI systems and tools being applied to optimize the use of c…
S50
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — In conclusion, data, AI, and new technologies offer great potential in revolutionising and improving agriculture. Howeve…
S51
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jungwook Kim: Thank you. So the question is dealing with the safety or security issues around the AI and it’s a public o…
S52
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Kolbe-Guyot explains that public administration faces unique constraints because citizens cannot choose alternative gove…
S53
Safe, secure, and trustworthy AI: What is it and how do we get there? — While global agreements on core principles are welcome, they need to turn into concrete action. So what does it mean to …
S54
Agents of Change AI for Government Services & Climate Resilience — It’s a hard one. I think, you know, AI has evolved into a global multi-disciplinary field. And I think, you know, we n…
S55
Agentic AI and the new industrial diplomacy — The shift from ‘pilot to plant’ is happening globally, but the motivations, players, and governance challenges vary shar…
S56
WS #283 AI Agents: Ensuring Responsible Deployment — Prendergast frames agentic AI as a critical technological shift where AI has evolved beyond reactive tools to become pro…
S57
https://dig.watch/event/india-ai-impact-summit-2026/agents-of-change-ai-for-government-services-climate-resilience — So when we built our AI advisors we did something unconventional. Right now we are trying to do on the pilot stage. We a…
S59
Telangana launches Aikam to scale AI deployment — The Telangana government has launched Aikam, a new autonomous body aimed at positioning the state as a global proving grou…
S60
Keynote-António Guterres — I urge Member States, industry and civil society to contribute to the panel’s work. Second, launching a global dia…
S61
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Third, ensuring transparency in AI systems: Commanders must understand the data sources, training methodologies, and deci…
S62
Building the Next Wave of AI_ Responsible Frameworks & Standards — Bhattacharya explained that trust ranks first among Salesforce’s five core values—trust, customer success, innovation, e…
S63
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — -Data sovereignty: Where Europe should maintain complete control -Operational sovereignty: Ensuring continuity under ex…
S64
The Declaration for the Future of the Internet: Principles to Action — A balanced scorecard with certain parameters provides a measurable indicator of the progress made.
S65
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S66
Open Forum #18 Digital Cooperation for Development Ungis in Action — Establishing concrete metrics and evaluation frameworks for measuring WSIS implementation progress
S67
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Outcome Focus: Success should be measured by meaningful business and human outcomes rather than just productivity metric…
S68
WAIGF Opening Ceremony & Keynote — Hajia Sani: I’m sure we can do much better than that. Another round of applause for the Minister. Thank you so much. You…
S69
(Day 3) General Debate – General Assembly, 79th session: morning session — President: On behalf of the Assembly, I wish to thank the President of the Republic of the Gambia. The Assembly will h…
S70
https://dig.watch/event/india-ai-impact-summit-2026/trusted-connections_-ethical-ai-in-telecom-6g-networks — This is not a science fiction. This is the power of AI in telecommunication. Today, AI is transforming industries. And a…
S71
Opening remarks — Morning greetings were extended to participants at the conference, including those joining virtually, with particular ac…
S72
Keynote-Sundar Pichai — Namaste. Thank you. Thank you. Prime Minister Modi and distinguished leaders. It’s wonderful to be back in India. Every …
S73
Steering the future of AI — Major Discussion Points:
S74
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — This presentation is structured as a single, extended keynote rather than a traditional discussion, but Hunter-Torricke’…
S75
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — Pentland presented a future where AI agents would handle virtually every business and government process, essentially ad…
S76
The Future of the Internet: Navigating the Transition to an Agentic Web — Historical dominance of browser-based search experiences; emerging possibilities in voice, thought understanding, and ro…
S77
Thinking through Augmentation — While Ucuzoglu is optimistic about the long-term impact of transformative technology, he acknowledges that it is not an …
S78
Fireside Conversation: 02 — This discussion features AI pioneer Yann LeCun, known as the “godfather of deep learning,” speaking with moderator Maria…
S79
CLOSING CEREMONY | IGF 2023 — Rodney Taylor:Thank you. Distinguished ladies and gentlemen, good evening. I am honored to speak this evening, and I had…
S80
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Information should be treated as a public good rather than a commercial commodity
S81
Capacity Building in Digital Health — Chopra illustrated this with a tuberculosis detection example: rather than training more healthcare workers to detect tu…
S82
Keynote-Rajesh Subramanian — Intelligence is not an asset, it’s infrastructure, the foundation of the future of global progress, productivity, and ec…
S83
Open Forum #33 Building an International AI Cooperation Ecosystem — Kurbalija argues that AI has transformed from being a mysterious technology controlled by a few developers and top labs …
S84
GEO-politics/economics/emotions in the AI era — This analysis has framed this recalibration through three interconnected lenses:
S85
How AI Drives Innovation and Economic Growth — Evidence from around the world is consistent with this. Farmers respond to these AI weather forecasts. So I think that’s…
S86
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Audience- Various audience members asking questions
S87
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — This discussion from the AI for Good conference featured presentations on using artificial intelligence to combat aging …
S88
Panel Discussion Data Sovereignty India AI Impact Summit — So you’re not left behind. See, AI is a journey where we don’t want any country to be left behind. One, lack of… resou…
S89
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Because innovation means progress, for us humans and for our planet. So indeed, what better motto than People, Planet, P…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Minister Sridhar Babu
5 arguments · 122 words per minute · 1656 words · 811 seconds
Argument 1
Transition from generative AI to “agentic” AI; the traditional search bar is being replaced (Minister Sridhar Babu)
EXPLANATION
The Minister describes a shift from generative AI that merely provides answers to agentic AI that can take actions autonomously. He notes that the classic search‑bar interface is becoming obsolete as more proactive AI systems emerge.
EVIDENCE
He states that we are moving beyond generative AI that simply answers toward agentic AI that acts, indicating a new paradigm in AI development [22-23]. He also observes that the search bar is dying, being replaced by something more profound [33-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes a shift from answer-based generative AI to action-oriented agentic AI and the decline of the classic search bar, which is highlighted in the opening remarks about AI as an inflection point [S1].
MAJOR DISCUSSION POINT
Shift from answer‑based to action‑based AI
AGREED WITH
Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Argument 2
AI advisors for farmers, flood prediction, climate‑responsive services, and AI‑driven urban planning (Minister Sridhar Babu)
EXPLANATION
The Minister outlines several government‑level AI applications, including farmer advisory systems, flood forecasting, climate‑responsive service delivery, and satellite‑driven urban planning. These examples illustrate how AI agents are being integrated into everyday governance to improve resilience and efficiency.
EVIDENCE
He explains that AI can act as a co-governor to predict floods before clouds gather over the Musi river and allocate resources pre-emptively [45-48]. He describes pilots where farmers train the system with local dialects and soil wisdom, turning lived patterns into model inputs [48-55]. He mentions satellite-driven heat analysis that now informs zoning, green belts, and urban cooling strategies for Hyderabad [58-62]. He also notes solar-powered edge compute nodes that keep services operational across 33 districts when the grid fails [63-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI-driven farmer advisories, flood forecasting and climate-responsive planning are described in the panel on AI for climate resilience and disaster response [S2].
MAJOR DISCUSSION POINT
Practical AI use cases in agriculture and climate
AGREED WITH
Saibal Chakraborty
Argument 3
Creation of a sovereign AI nerve centre, AI city, and open data exchange platform for statewide services (Minister Sridhar Babu)
EXPLANATION
The Minister announces the development of a state‑level AI hub, an AI‑focused city, and a data exchange platform that will serve as a sovereign AI infrastructure. These initiatives aim to foster AI research, talent development, and secure data handling within Telangana.
EVIDENCE
He describes upcoming state-of-the-art infrastructure in an AI city and a net-zero Bharat future city designed as self-learning territories that provide compute and policy advice [70-72]. He introduces ICOM, the first sovereign AI nerve centre intended as an innovation hub and talent pipeline [73]. He details the Telangana data exchange platform that hosts 1,084 datasets, converting administrative exhaust into ecological signals and enabling a sovereign data pipeline [73-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The minister’s announcement of a sovereign AI nerve centre and an AI-city aligns with references to a state-level AI hub and data exchange platform in the policy briefing [S1] and the full-stack sovereign AI discussion [S14].
MAJOR DISCUSSION POINT
Building sovereign AI infrastructure
AGREED WITH
Srinivas Tallapragada
Argument 4
AI should be treated as a co‑governor, with policies that embed human‑pilot oversight (Minister Sridhar Babu)
EXPLANATION
The Minister proposes that AI systems function as co‑governors alongside human decision‑makers, providing predictive capabilities while retaining human oversight. This approach is presented as a way to enhance public service delivery without relinquishing control.
EVIDENCE
He likens AI to a co-pilot, stating that the government will rely on AI as co-governors that can predict floods and allocate resources before citizens request services [45-48]. Earlier, he reflects on the historical shift from command-based interactions to a partnership with technology, emphasizing the need for human oversight [25-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The co-pilot framing of AI as a partner to human decision-makers is reiterated in the opening remarks about AI as public infrastructure and governance tool [S1].
MAJOR DISCUSSION POINT
AI as collaborative governance tool
AGREED WITH
Srinivas Tallapragada, Lee Tiedrich, Mike Haley, Saibal Chakraborty
DISAGREED WITH
Mike Haley, Lee Tiedrich
Argument 5
AI is a form of public infrastructure; Telangana is building a sovereign AI nerve centre and data exchange platform (Minister Sridhar Babu)
EXPLANATION
The Minister frames AI as essential public infrastructure, comparable to roads or electricity, and highlights Telangana’s efforts to establish a sovereign AI nerve centre and an open data exchange. This positions AI as a foundational element of state development.
EVIDENCE
He declares that the nation leading this century will treat intelligence as public infrastructure rather than a product [40-41]. He reiterates the creation of the sovereign AI nerve centre and the data exchange platform that ensures intelligence is grounded in integrity and kept within the state [70-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The minister’s view of AI as essential public infrastructure is echoed in the opening statements that compare intelligence to roads and electricity [S1] and in broader discussions of trusted digital infrastructure [S15].
MAJOR DISCUSSION POINT
AI as public infrastructure
Srinivas Tallapragada
6 arguments · 171 words per minute · 1282 words · 449 seconds
Argument 1
An AI agent must have a defined role, knowledge, memory, actuation ability, and guardrails (Srinivas Tallapragada)
EXPLANATION
Srinivas outlines the essential components of an AI agent: a clear role, domain knowledge, both short‑term and long‑term memory, the ability to act via APIs or channels, and robust guardrails to prevent misuse. These elements together constitute a trustworthy, functional agent.
EVIDENCE
He explains that an agent needs to know its role, possess knowledge, retain short-term and long-term memory, be able to act on digital interfaces such as APIs, and operate across channels like WhatsApp or web [136-144]. He stresses the importance of guardrails and a trust layer to mitigate hallucinations, bias, and toxicity [190-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines for trustworthy agents emphasise role definition, knowledge bases, memory, actuation via APIs and strong guardrails, as outlined in the safe-AI at scale framework [S20].
MAJOR DISCUSSION POINT
Core attributes of AI agents
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Mike Haley
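The components Srinivas enumerates — a defined role, a knowledge base, short- and long-term memory, actuation across channels, and guardrails — can be pictured as a minimal data structure. The sketch below is purely illustrative: every class, field, and value is our assumption for exposition, not any vendor's actual agent API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    short_term: list = field(default_factory=list)   # current conversation turns
    long_term: dict = field(default_factory=dict)    # facts retained across sessions

@dataclass
class Agent:
    role: str          # e.g. "land-records assistant" (hypothetical)
    knowledge_base: dict   # domain content the agent can consult
    memory: AgentMemory
    channels: list     # e.g. ["whatsapp", "web"]
    guardrails: list   # callables that veto unsafe requests

    def act(self, request: str, channel: str) -> str:
        if channel not in self.channels:
            return "unsupported channel"
        for check in self.guardrails:          # trust layer runs before any action
            if not check(request):
                return "blocked by guardrail"
        self.memory.short_term.append(request)  # remember the interaction
        # Real actuation would call an external API here; we return a stub answer.
        return self.knowledge_base.get(request, "escalate to human")
```

For example, an agent with the guardrail `lambda r: "delete" not in r` would answer a known land-record query but refuse a destructive request, and reject any channel it was not configured for.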
Argument 2
AI agents support disaster response, police assistance, and public safety operations (Srinivas Tallapragada)
EXPLANATION
Srinivas shares examples where AI agents are deployed for non‑emergency citizen queries and to assist police officers, demonstrating their utility in public safety and disaster response contexts. These pilots show that agents can provide timely information and support to frontline personnel.
EVIDENCE
He cites a city in New Thames, UK, where an agent called Bobby answers over 90% of citizens’ non-emergency questions [185-188]. He also mentions a Tasmanian city using an agent named Terry to support more than a thousand police officers in the field, answering their operational questions [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel on AI for climate resilience cites concrete deployments of agents for disaster response and public safety, matching the described use cases [S2].
MAJOR DISCUSSION POINT
Public safety applications of AI agents
Argument 3
Robust guardrails, auditability, and a command‑center are required for confidence in AI deployments (Srinivas Tallapragada)
EXPLANATION
Srinivas argues that trustworthy AI deployment demands a centralized command centre, comprehensive auditability, and the ability for independent parties to test systems. These mechanisms build confidence for governments and citizens alike.
EVIDENCE
He states that a command centre is needed to differentiate pilot demos from real-life deployments, allowing customers or governments to audit, test, and even have independent parties verify the system [214-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for auditability, a central command centre and independent testing are central to the safe-AI monitoring and assurance discussion [S19] and the broader responsible AI guidelines [S20].
MAJOR DISCUSSION POINT
Need for oversight infrastructure
AGREED WITH
Minister Sridhar Babu, Lee Tiedrich, Mike Haley, Saibal Chakraborty
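The auditability Srinivas calls for could start with something as simple as an append-only log that an independent party replays after the fact. The sketch below is a hedged illustration under that assumption; the `AUDIT_LOG` name and the JSON-lines format are ours, not a description of any actual command-centre product.

```python
import functools
import json
import time

AUDIT_LOG = []  # append-only record an independent auditor could inspect

def audited(fn):
    """Record every call and result so deployments can be verified later."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        AUDIT_LOG.append(json.dumps({
            "ts": time.time(),
            "action": fn.__name__,
            "args": [repr(a) for a in args],
            "result": repr(result),
        }))
        return result
    return wrapper

@audited
def answer_query(q: str) -> str:
    # Stand-in for a real agent action (hypothetical).
    return f"response to {q}"
```

Because each entry is self-describing JSON, a government customer or a third party can test the system and reconcile the log against observed behaviour, which is the distinction Srinivas draws between a pilot demo and a real deployment.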
Argument 4
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply‑chain control) (Srinivas Tallapragada)
EXPLANATION
Srinivas differentiates two layers of sovereignty: strategic, which concerns control over data and policy, and technical, which involves ownership of the entire hardware and software supply chain. He urges governments to pursue both tracks, emphasizing that strategic sovereignty can deliver immediate benefits.
EVIDENCE
He defines strategic sovereignty as control over data, governance policies, and operational policies, which can be implemented now for value [247-250]. He describes technical sovereignty as control over the full supply chain, including chips, and recommends governments treat these as separate tracks, not letting the longer-term technical track delay the benefits of the strategic one [251-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The differentiation between strategic and technical AI sovereignty is explicitly addressed in the minister’s remarks on data governance and the full-stack sovereign AI briefing [S1] and [S14].
MAJOR DISCUSSION POINT
Two‑level AI sovereignty
AGREED WITH
Minister Sridhar Babu
Argument 5
Policy frameworks should be agile, allowing rapid updates as AI technology evolves (Srinivas Tallapragada)
EXPLANATION
Srinivas advocates for agile regulation that can be quickly revised as AI capabilities change, likening it to a product feedback loop. This approach would reduce fear of getting standards perfect from day one and enable continuous improvement.
EVIDENCE
He notes that current policy frameworks are slow, causing fear, and suggests a feedback loop that allows standards to be updated as new data emerges, akin to agile development in engineering [317-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for agile regulation and learning-by-doing is highlighted in the discussion on rapid AI adoption and agile policy cycles [S2].
MAJOR DISCUSSION POINT
Agile AI regulation
AGREED WITH
Lee Tiedrich
DISAGREED WITH
Saibal Chakraborty, Lee Tiedrich, Mike Haley
Argument 6
Success is reflected in measurable income growth for the bottom 50 % of the population (Srinivas Tallapragada)
EXPLANATION
Srinivas envisions that within three years AI should have lifted the per‑capita income of the lowest half of the population, using this metric as a gauge of AI’s societal impact. He frames income uplift as the ultimate indicator of technology’s benefit.
EVIDENCE
He states his hope that within three years the bottom 50% of the income distribution will show measurable per-capita income growth, describing this as the real impact of the technology [355-357].
MAJOR DISCUSSION POINT
Income‑based impact metric
Saibal Chakraborty
5 arguments · 152 words per minute · 569 words · 224 seconds
Argument 1
Agentic AI enables end‑to‑end execution of business and government processes (Saibal Chakraborty)
EXPLANATION
Saibal asserts that the conversation has moved from solving isolated problems to enabling AI agents that can execute entire business or governmental workflows from start to finish. This represents a fundamental change in how AI is applied.
EVIDENCE
He remarks that the discussion has moved decisively towards agentic AI and that we are now looking at end-to-end AI-led execution of business processes or government processes [110-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to agentic AI that can execute full workflows, rather than isolated tasks, is noted in the agentic AI security-by-design overview [S17].
MAJOR DISCUSSION POINT
End‑to‑end AI execution
AGREED WITH
Minister Sridhar Babu, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Argument 2
AI can draft multi‑million‑dollar RFPs and automate public procurement workflows (Saibal Chakraborty)
EXPLANATION
Saibal raises the scenario where an AI agent prepares large‑scale procurement documents, highlighting the need to consider appropriate guardrails and human oversight for high‑value transactions. He questions the extent of autonomy such agents should have.
EVIDENCE
He describes an agent crafting a multi-million or billion-dollar RFP on behalf of the government and asks what guardrails are needed, whether full autonomy is possible, or if a final human layer is required to ensure correctness [148-154].
MAJOR DISCUSSION POINT
AI in public procurement
Argument 3
Public sector must decide the level of autonomy versus human oversight for high‑stakes tasks (Saibal Chakraborty)
EXPLANATION
Saibal emphasizes that governments need to determine how much autonomy to grant AI agents, especially for critical functions like procurement, balancing speed with procedural rigor. He underscores the importance of maintaining human checkpoints.
EVIDENCE
He asks whether an agent can be fully autonomous in high-stakes contexts or whether a human layer must remain to dot the i’s and cross the t’s, noting the potential negative impact of mistakes [149-154].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN GA statement calls for universal guardrails, clear accountability and a balance between automation and human oversight for high-impact AI applications [S18].
MAJOR DISCUSSION POINT
Balancing autonomy and oversight
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Lee Tiedrich, Mike Haley
DISAGREED WITH
Lee Tiedrich, Srinivas Tallapragada, Mike Haley
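One way to encode the human checkpoint Saibal describes is a simple routing rule: below some threshold the agent executes on its own; above it, the proposal goes to a reviewer. The sketch below is illustrative only — the monetary cap and all names are our assumptions, not anything proposed on the panel.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str   # e.g. "draft RFP for flood-control works" (hypothetical)
    value_usd: float

def requires_human(action: ProposedAction, autonomy_cap_usd: float = 10_000) -> bool:
    """Route anything above an illustrative monetary cap to a human reviewer."""
    return action.value_usd > autonomy_cap_usd

def execute(action: ProposedAction, approve) -> str:
    # `approve` is a callback standing in for the final human layer.
    if requires_human(action):
        return "executed" if approve(action) else "rejected by reviewer"
    return "executed autonomously"
```

Under this rule, a multi-million-dollar RFP would always pass through the human layer, while a routine low-stakes query would not — which is the autonomy-versus-oversight trade-off the public sector must calibrate.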
Argument 4
Public officials need upskilling to understand AI trust limits and when human checks are required (Saibal Chakraborty)
EXPLANATION
Saibal points out that many public officials lack AI engineering expertise, so systematic upskilling is essential for them to recognize where AI outputs are trustworthy and where additional human verification is needed.
EVIDENCE
He notes that district-level officials are not AI engineers and must be upskilled to know what can be trusted and what requires extra human checks, highlighting this as a major component for AI adoption in the public sector [253-254].
MAJOR DISCUSSION POINT
Upskilling government staff
AGREED WITH
Minister Sridhar Babu
Argument 5
Success is achieved when a farmer can receive vernacular, AI‑driven advice at scale across India (Saibal Chakraborty)
EXPLANATION
Saibal defines success as the ability for every farmer to interact with an AI tool in their native language and obtain practical, actionable advice, thereby demonstrating inclusive AI impact.
EVIDENCE
He states that the true win for AI would be if a farmer could talk to a small language-model-powered tool in his or her own vernacular and receive practical advice on crops and cattle, scaled across India [334-335].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The climate-resilience panel describes farmer-focused AI tools that deliver advice in local languages, illustrating the envisioned scalable vernacular service [S2].
MAJOR DISCUSSION POINT
Inclusive AI for agriculture
Mike Haley
6 arguments · 213 words per minute · 1516 words · 426 seconds
Argument 1
Shift from narrow, task‑specific agents to systems‑level reasoning and chained actions (Mike Haley)
EXPLANATION
Mike observes that earlier AI agents were limited to narrow tasks, whereas current agents can perform chain‑of‑thought reasoning and coordinate multiple actions across systems. This marks a transition to more complex, integrated AI capabilities.
EVIDENCE
He contrasts last year’s narrow agents that solved specific problems with today’s agents that can abstract problems, perform chain-of-thought reasoning, and operate at a systems level [119-122].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The evolution toward chain-of-thought reasoning and systems-level AI agents is discussed in the agentic AI capabilities briefing [S17].
MAJOR DISCUSSION POINT
From narrow to systems‑level AI
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada
Argument 2
AI agents can analyze floodplains, optimise drainage, and assist in infrastructure design (Mike Haley)
EXPLANATION
Mike provides a concrete use case where AI agents evaluate floodplain characteristics and suggest drainage optimizations, illustrating how agents can augment civil‑engineering design processes.
EVIDENCE
He explains that AI agents can analyze floodplains, evaluate water drainage, and optimize design decisions early in the process, thereby improving infrastructure outcomes [169-174].
MAJOR DISCUSSION POINT
AI‑assisted infrastructure design
Argument 3
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
EXPLANATION
Mike stresses that AI systems are inherently probabilistic and cannot be made perfectly deterministic; therefore, engineers must retain the ability to review, adjust, and re‑run outputs, which builds trust through transparency and control.
EVIDENCE
He notes that guardrails cannot guarantee perfect results, so systems should provide accuracy feedback and allow engineers to intervene, tweak, reassess, or rerun the model, emphasizing that this control is essential for trust [217-231].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI guidelines stress human-in-the-loop control, transparency and the ability to adjust probabilistic model outputs [S20].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop for probabilistic AI
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Lee Tiedrich, Saibal Chakraborty
DISAGREED WITH
Minister Sridhar Babu, Lee Tiedrich
Argument 4
Engineers must retain the ability to review, adjust, and re‑run AI outputs to maintain trust (Mike Haley)
EXPLANATION
Mike reiterates that giving engineers the capacity to modify AI results and understand the underlying processes is crucial for confidence in AI deployments. This aligns with the broader theme of transparent, controllable systems.
EVIDENCE
He describes how engineers can give feedback, adjust parameters, and rerun models, and that this ability to control the system underpins trust rather than expecting flawless outputs [224-231].
MAJOR DISCUSSION POINT
Control loops for trustworthy AI
Argument 5
Industry can pre‑empt regulation by providing “transparency cards” that disclose model provenance, accuracy, and bias (Mike Haley)
EXPLANATION
Mike outlines a proactive industry measure where each AI feature includes a “transparency card” similar to a nutrition label, detailing model type, training data, accuracy, and known biases. This aims to give governments clear information and potentially shape future standards.
EVIDENCE
He explains that every AI feature in their software now includes a transparency card showing model details, training data, accuracy, and bias information, and that this practice has attracted interest as a possible standard [301-304].
MAJOR DISCUSSION POINT
Proactive disclosure for regulatory alignment
DISAGREED WITH
Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada
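A transparency card of the kind Mike describes can be modelled as a small immutable record rendered into a human-readable label. The schema below is hypothetical — the field names are our guesses at a “nutrition label” format, not the actual card format described in the session.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransparencyCard:
    feature: str         # product feature the model powers
    model_type: str      # e.g. "gradient-boosted trees" (illustrative)
    training_data: str   # provenance of the training corpus
    accuracy: float      # headline accuracy on a held-out benchmark
    known_biases: list   # documented failure modes and skews

    def label(self) -> str:
        """Render the card as a short, nutrition-label-style disclosure."""
        biases = ", ".join(self.known_biases) or "none documented"
        return (f"{self.feature}: {self.model_type}, "
                f"trained on {self.training_data}; "
                f"accuracy {self.accuracy:.0%}; known biases: {biases}")
```

Freezing the dataclass reflects the disclosure intent: once published alongside a feature, the card is a fixed statement of provenance that regulators and users can hold the vendor to.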
Argument 6
Success means faster, safer infrastructure development with public confidence and engineer endorsement (Mike Haley)
EXPLANATION
Mike envisions AI enabling infrastructure projects to be completed more quickly and safely, while also ensuring that engineers and the public trust the technology. He links speed, safety, and confidence as key success metrics.
EVIDENCE
He states that in a few years we should see infrastructure built faster than ever, without compromising safety, and that engineers and the public must feel comfortable and secure with the AI-enabled processes [345-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s digital and industrial AI strategy emphasizes trusted, interoperable infrastructure that accelerates projects while maintaining safety and public confidence [S15].
MAJOR DISCUSSION POINT
Accelerated, trustworthy infrastructure
Lee Tiedrich
5 arguments · 197 words per minute · 833 words · 252 seconds
Argument 1
AI agents can act on behalf of people, moving beyond answering queries (Lee Tiedrich)
EXPLANATION
Lee highlights that the emergence of agentic AI allows systems not only to provide answers but also to take actions on behalf of users, representing a major shift in AI capability.
EVIDENCE
She notes that the biggest change is the ability of AI not only to do end-to-end tasks but also to act on behalf of people [115-117].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The agentic AI overview highlights the new capability of agents to act on users’ behalf, not just provide answers [S17].
MAJOR DISCUSSION POINT
AI acting for users
AGREED WITH
Minister Sridhar Babu, Saibal Chakraborty, Mike Haley, Srinivas Tallapragada
Argument 2
Over‑reliance risks demand sandboxes, human‑in‑the‑loop safeguards, and clear liability rules (Lee Tiedrich)
EXPLANATION
Lee warns that excessive reliance on AI without proper safeguards can be dangerous, recommending sandbox environments, human‑in‑the‑loop controls, and clear liability frameworks to mitigate risks.
EVIDENCE
She discusses the need for sandboxes, human-in-the-loop safeguards, and considerations of liability and testing when agents call third-party agents [190-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN GA call for universal guardrails and the safe-AI at scale paper both stress sandbox testing, human-in-the-loop safeguards and clear liability frameworks [S18].
MAJOR DISCUSSION POINT
Risk mitigation for AI deployment
AGREED WITH
Minister Sridhar Babu, Srinivas Tallapragada, Mike Haley, Saibal Chakraborty
DISAGREED WITH
Minister Sridhar Babu, Mike Haley
Argument 3
Human judgment is essential for selecting safe use cases and applying guardrails (Lee Tiedrich)
EXPLANATION
Lee stresses that selecting appropriate use cases and implementing guardrails requires human judgment, emphasizing a cautious and smart approach to AI adoption in government.
EVIDENCE
She advises picking use cases wisely, noting that AI excels in some areas while others are not ready for prime time, and that over-reliance can cause neglect of necessary guardrails and human oversight [191-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on responsible AI deployment underscores the role of human judgment in use-case selection and guardrail design [S18].
MAJOR DISCUSSION POINT
Human‑centric AI governance
Argument 4
Global, multi‑disciplinary standards and evaluation ecosystems are needed, with localisation for different jurisdictions (Lee Tiedrich)
EXPLANATION
Lee calls for internationally coordinated standards and evaluation frameworks for AI, while allowing localisation to respect different legal and cultural contexts. She sees this as a foundation for effective regulation.
EVIDENCE
She describes the need for global, multi-disciplinary standards, evaluation ecosystems, and the necessity to localise standards for different jurisdictions, noting differing regulatory approaches in the U.S. and Europe [260-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UN GA statement calls for globally coordinated, multi-disciplinary AI standards that can be localized to regional legal and cultural contexts [S18].
MAJOR DISCUSSION POINT
International AI standards with local adaptation
AGREED WITH
Srinivas Tallapragada
DISAGREED WITH
Saibal Chakraborty, Srinivas Tallapragada, Mike Haley
Argument 5
Success includes active AI safety institutes sharing evaluation techniques worldwide (Lee Tiedrich)
EXPLANATION
Lee envisions a future where AI safety institutes are active, collaborating globally to develop and share evaluation methods, thereby strengthening AI safety practices across regions.
EVIDENCE
She mentions that within three years there should be active AI safety institutes sharing evaluation techniques, with efforts to localise these practices for both Global North and South [339-343].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same UN GA discussion envisions active AI safety institutes that share evaluation methods across regions to strengthen global AI safety [S18].
MAJOR DISCUSSION POINT
Global AI safety collaboration
Agreements
Agreement Points
Recognition of a major shift from generative, answer‑based AI to agentic, action‑oriented AI
Speakers: Minister Sridhar Babu, Saibal Chakraborty, Lee Tiedrich, Mike Haley, Srinivas Tallapragada
Transition from generative AI to “agentic” AI; the traditional search bar is being replaced (Minister Sridhar Babu)
Agentic AI enables end‑to‑end execution of business and government processes (Saibal Chakraborty)
AI agents can act on behalf of people, moving beyond answering queries (Lee Tiedrich)
Shift from narrow, task‑specific agents to systems‑level reasoning and chained actions (Mike Haley)
An AI agent must have a defined role, knowledge, memory, actuation ability, and guardrails (Srinivas Tallapragada)
All speakers highlighted that AI is moving beyond simple answer generation toward autonomous, action-taking agents that can execute whole workflows, signalling a new paradigm in AI development [22-23][110-112][115-117][119-122][136-144].
POLICY CONTEXT (KNOWLEDGE BASE)
The minister highlighted that AI is moving beyond simple question answering toward autonomous agents that can take real-world actions, marking a transition from generative models to action-oriented systems [S37].
Need for robust guardrails, auditability and human‑in‑the‑loop oversight for AI agents
Speakers: Minister Sridhar Babu, Srinivas Tallapragada, Lee Tiedrich, Mike Haley, Saibal Chakraborty
AI should be treated as a co‑governor, with policies that embed human‑pilot oversight (Minister Sridhar Babu)
Robust guardrails, auditability, and a command‑center are required for confidence in AI deployments (Srinivas Tallapragada)
Over‑reliance risks demand sandboxes, human‑in‑the‑loop safeguards, and clear liability rules (Lee Tiedrich)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Public sector must decide the level of autonomy versus human oversight for high‑stakes tasks (Saibal Chakraborty)
Every panelist stressed that AI agents must be bounded by clear guardrails, be auditable, and retain human oversight, especially for high-impact government functions [25-30][136-144][214-215][190-203][217-231][148-154].
POLICY CONTEXT (KNOWLEDGE BASE)
Panelists stressed the critical importance of enterprise guardrails, auditability and human-in-the-loop (or on-the-loop) oversight for agentic AI, especially in high-risk environments [S41][S42].
Capacity development and training are essential for effective AI deployment
Speakers: Minister Sridhar Babu, Saibal Chakraborty
AI advisors for farmers, flood prediction, climate‑responsive services, and AI‑driven urban planning (Minister Sridhar Babu)
Public officials need upskilling to understand AI trust limits and when human checks are required (Saibal Chakraborty)
Both the minister and Saibal highlighted the importance of training end-users, from farmers in rural Telangana to public officials across India, to ensure AI systems are used responsibly and effectively [48-55][253-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple workshops and roadmaps identify capacity building (technical training and policy-level education) as a prerequisite for responsible AI implementation across sectors [S33][S34][S36][S38].
AI is being positioned as core public infrastructure and a sovereign data ecosystem
Speakers: Minister Sridhar Babu, Srinivas Tallapragada
Creation of a sovereign AI nerve centre, AI city, and open data exchange platform for statewide services (Minister Sridhar Babu)
Distinction between strategic sovereignty (data and policy control) and technical sovereignty (full supply‑chain control) (Srinivas Tallapragada)
The minister and Srinivas concur that AI should be treated like roads or electricity: public infrastructure backed by a sovereign data platform that guarantees strategic control over data and policy while charting a longer-term path to technical sovereignty [40-41][70-77][247-252].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s white paper treats AI compute, datasets and models as a digital public good, reflecting a broader view of AI as essential public infrastructure and a sovereign data ecosystem [S30][S31][S32].
Regulatory frameworks for AI must be agile and adaptable to rapid technological change
Speakers: Srinivas Tallapragada, Lee Tiedrich
Policy frameworks should be agile, allowing rapid updates as AI technology evolves (Srinivas Tallapragada)
Global, multi‑disciplinary standards and evaluation ecosystems are needed, with localisation for different jurisdictions (Lee Tiedrich)
Both speakers argue for flexible, evolving policy and standards mechanisms that can keep pace with AI advances, combining agile domestic regulation with internationally coordinated, locally-adapted standards [317-324][260-280].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses call for agile, risk-based regulatory approaches that can keep pace with fast-moving AI technologies, echoing recommendations from the EU AI Act and IGF discussions [S45][S46][S47].
Similar Viewpoints
Both panelists see the emergence of agentic AI as a catalyst that moves applications from isolated, task‑specific tools toward integrated, system‑wide process automation, enabling governments to streamline complex workflows [110-112][119-122].
Speakers: Saibal Chakraborty, Mike Haley
Agentic AI enables end‑to‑end execution of business and government processes (Saibal Chakraborty)
Shift from narrow, task‑specific agents to systems‑level reasoning and chained actions (Mike Haley)
Unexpected Consensus
AI‑driven farmer support as a key success metric
Speakers: Minister Sridhar Babu, Saibal Chakraborty
AI advisors for farmers, flood prediction, climate‑responsive services, and AI‑driven urban planning (Minister Sridhar Babu)
Success is achieved when a farmer can receive vernacular, AI‑driven advice at scale across India (Saibal Chakraborty)
While the minister discussed pilot projects that train AI with farmer dialects and local knowledge, Saibal framed nationwide vernacular advisory capability as the ultimate measure of AI success, showing an unexpected alignment between a state-level implementation focus and a pan-India inclusive impact goal [48-55][334-335].
POLICY CONTEXT (KNOWLEDGE BASE)
Agritech literature emphasizes AI-enabled farmer support (optimising water, fertilizer and pest use) as a primary metric for impact, while noting data accessibility and ecosystem support as critical enablers [S48][S49][S50].
Overall Assessment

The panel displayed strong convergence on several fronts: the transition to agentic AI, the necessity of guardrails and human oversight, the importance of capacity building, the framing of AI as sovereign public infrastructure, and the need for agile, standards‑based regulation. These shared positions cut across government, academia and industry, indicating a common understanding of both opportunities and risks associated with AI agents.

High consensus – the speakers largely agree on the direction of AI development and the policy/operational safeguards required, which bodes well for coordinated action on AI governance, capacity building and infrastructure investment.

Differences
Different Viewpoints
Confidence in AI’s predictive capability for disaster/flood response and the degree of autonomy it should have
Speakers: Minister Sridhar Babu, Mike Haley, Lee Tiedrich
AI should be treated as a co‑governor, with policies that embed human‑pilot oversight (Minister Sridhar Babu)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Over‑reliance risks demand sandboxes, human‑in‑the‑loop safeguards, and clear liability rules (Lee Tiedrich)
The Minister claims AI can act as a co-governor that predicts floods before clouds gather and allocates resources pre-emptively [45-48], while Mike stresses that AI is inherently probabilistic, cannot guarantee perfect predictions and therefore requires human engineers to retain control and intervene [217-231]. Lee adds that over-reliance on such autonomous systems is risky and calls for sandboxes, human-in-the-loop safeguards and liability frameworks [190-203]. These positions reflect a clash between a high-confidence, autonomous vision and a cautious, human-centric safety stance.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on climate-resilient AI note its potential for high-resolution flood forecasting but also highlight the need for reliable data, human oversight and clear limits on autonomy in emergency contexts [S39][S41].
Preferred mechanism for ensuring safe, trustworthy AI deployment in the public sector
Speakers: Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada, Mike Haley
Public sector must decide the level of autonomy versus human oversight for high‑stakes tasks (Saibal Chakraborty)
Global, multi‑disciplinary standards and evaluation ecosystems are needed, with localisation for different jurisdictions (Lee Tiedrich)
Policy frameworks should be agile, allowing rapid updates as AI technology evolves (Srinivas Tallapragada)
Industry can pre‑empt regulation by providing “transparency cards” that disclose model provenance, accuracy, and bias (Mike Haley)
Saibal argues that governments need to set guardrails and decide how much autonomy to grant AI agents, especially for critical functions like procurement [148-154]. Lee proposes a top-down solution: develop global, multi-disciplinary standards and evaluation ecosystems that can be localised [260-280]. Srinivas suggests a bottom-up, agile regulatory approach where standards are continuously updated through feedback loops [317-324]. Mike offers a market-driven answer, where industry voluntarily adds transparency cards to each AI feature to inform regulators and users [301-304]. The disagreement lies in whether the primary driver of safe AI should be government-mandated standards, agile policy cycles, or industry self-regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
Workshops on trustworthy AI stress the role of robust guardrails, auditability, international standards and human accountability as mechanisms for safe public-sector AI deployment [S41][S51][S53].
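Mike’s “transparency card” idea can be illustrated with a small sketch. This is a hypothetical structure for discussion only; every field name below is an assumption, not Autodesk’s actual disclosure format:

```python
from dataclasses import dataclass

@dataclass
class TransparencyCard:
    """Hypothetical disclosure record attached to an AI feature.

    Field names are illustrative assumptions, not a real vendor schema.
    """
    feature_name: str
    model_provenance: str        # who built or fine-tuned the model
    training_data_sources: list  # datasets the model was trained on
    reported_accuracy: float     # e.g. benchmark accuracy in [0, 1]
    known_bias_notes: str        # documented bias findings, if any

    def summary(self) -> str:
        """Render a one-line disclosure for regulators and users."""
        return (f"{self.feature_name}: provenance={self.model_provenance}, "
                f"accuracy={self.reported_accuracy:.0%}, "
                f"sources={len(self.training_data_sources)} dataset(s)")

card = TransparencyCard(
    feature_name="generative-design-assist",
    model_provenance="in-house fine-tune of an open base model",
    training_data_sources=["licensed CAD corpus", "public standards docs"],
    reported_accuracy=0.87,
    known_bias_notes="under-represents non-metric building codes",
)
print(card.summary())
```

The point of such a card is that the disclosure travels with the feature itself, so regulators and end-users can inspect provenance and bias notes without a separate audit request.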
Unexpected Differences
The Minister’s optimistic claim that AI can replace the traditional search bar and act as a proactive co‑governor versus panelists’ caution about AI’s probabilistic limits and need for human control
Speakers: Minister Sridhar Babu, Mike Haley, Lee Tiedrich
AI should be treated as a co‑governor, with policies that embed human‑pilot oversight (Minister Sridhar Babu)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Over‑reliance risks demand sandboxes, human‑in‑the‑loop safeguards, and clear liability rules (Lee Tiedrich)
The Minister declares that the search bar is dying and that AI will act proactively as a co-governor for flood prediction and resource allocation [33-34][45-48]. This confident, near-autonomous vision was unexpected given the panel’s consistent emphasis on AI’s probabilistic nature, the necessity of human oversight, and the risks of over-reliance [217-231][190-203]. The contrast highlights a surprising gap between policy optimism and technical caution.
POLICY CONTEXT (KNOWLEDGE BASE)
The minister’s vision of AI as a proactive co-governor contrasts with panelist concerns about probabilistic outputs and the necessity of human oversight, reflecting an ongoing debate on agency, safety and governance of AI systems [S37][S41][S42][S38].
Overall Assessment

The discussion revealed a core consensus that AI agents must be governed by robust guardrails, auditability, and human oversight. The main points of contention centered on how much autonomy to grant AI systems—especially for high‑stakes public functions like disaster prediction—and on the best pathway to achieve safe deployment, whether through government‑driven standards, agile policy cycles, or industry‑led transparency measures. The unexpected optimism expressed by the Minister about AI’s autonomous capabilities contrasted sharply with the panel’s cautionary stance, underscoring a tension between policy ambition and technical realism.

Moderate to high. While participants share the overarching goal of trustworthy AI, they diverge significantly on the degree of autonomy and the primary mechanism for regulation, which could affect the speed and effectiveness of AI integration into public services.

Partial Agreements
All four speakers concur that AI agents need strong guardrails, auditability, and human oversight before being deployed in critical public‑sector contexts. However, they diverge on the concrete mechanisms: Saibal focuses on procedural guardrails, Lee on sandboxing and liability, Srinivas on a centralized command‑center and auditability, and Mike on engineer‑level transparency and control [148-154][190-203][214-215][217-231].
Speakers: Saibal Chakraborty, Lee Tiedrich, Srinivas Tallapragada, Mike Haley
Public sector must decide the level of autonomy versus human oversight for high‑stakes tasks (Saibal Chakraborty)
Over‑reliance risks demand sandboxes, human‑in‑the‑loop safeguards, and clear liability rules (Lee Tiedrich)
Robust guardrails, auditability, and a command‑center are required for confidence in AI deployments (Srinivas Tallapragada)
Because AI is probabilistic, human engineers need transparent control and the ability to intervene (Mike Haley)
Takeaways
Key takeaways
AI is moving from narrow, query‑based tools to agentic systems that can act end‑to‑end on behalf of users and governments.
Agentic AI requires a defined role, knowledge base, memory, actuation capability, and robust guardrails to be trustworthy.
Governments can leverage AI agents for concrete public‑service challenges such as flood prediction, agricultural advice, disaster response, public procurement, and infrastructure design.
Treating AI as a form of public infrastructure demands sovereign data strategies, exemplified by Telangana’s AI nerve centre and open data exchange platform.
Human‑in‑the‑loop oversight, upskilling of public officials, and transparent control mechanisms are essential because AI systems are probabilistic and can hallucinate.
Standardisation, evaluation ecosystems, and agile policy frameworks are needed to keep regulation in step with rapid AI advances, with localisation for different jurisdictions.
Success metrics should focus on tangible societal impact – e.g., vernacular AI assistance for farmers, income growth for low‑income populations, and faster, safer infrastructure development.
Resolutions and action items
Telangana will continue building its sovereign AI nerve centre (ICOM) and expand the Telangana data exchange platform to support AI‑driven governance.
Governments are encouraged to establish a ‘command‑center’ architecture for auditing, testing and monitoring AI agents before deployment.
Public sector bodies should launch up‑skilling programmes so officials understand AI trust limits and can apply human‑in‑the‑loop checks.
Adopt a phased ‘crawl‑walk‑run’ rollout model for AI agents, starting with low‑risk pilots (e.g., farmer advisory bots) and expanding as guardrails mature.
Industry participants (e.g., Autodesk) will provide “transparency cards” that disclose model provenance, accuracy, bias and data sources for AI features.
Create sandbox environments for high‑stakes use cases (e.g., AI‑generated RFPs) to test guardrails, liability rules and interoperability of third‑party agents.
Policymakers should design agile regulatory frameworks that allow standards and evaluation criteria to be updated iteratively as technology evolves.
Unresolved issues
Determining the optimal balance between full AI autonomy and required human oversight for high‑impact government tasks.
Establishing clear liability and accountability mechanisms when AI agents invoke third‑party services.
Defining concrete, universally accepted evaluation metrics and certification processes for AI agents across diverse jurisdictions.
Achieving technical sovereignty (full control over hardware supply chains) while still reaping benefits from strategic data sovereignty.
Ensuring equitable access to vernacular AI tools for all farmers and marginalized communities at scale.
How to synchronise global standards with local cultural, legal, and linguistic requirements without stalling innovation.
Suggested compromises
Implement human‑in‑the‑loop safeguards for critical processes while allowing agents to operate autonomously on lower‑risk tasks.
Adopt a two‑track sovereignty approach: pursue immediate strategic data sovereignty and plan for longer‑term technical sovereignty.
Use a “crawl‑walk‑run” methodology: start with simple, well‑guarded pilots, then progressively expand functionality as confidence grows.
Combine agile policy updates with sandbox testing, enabling rapid iteration of standards without waiting for full legislative cycles.
Pair AI agents with transparent control panels that let engineers intervene, adjust parameters, and override outputs when needed.
Thought Provoking Comments
Follow-up Questions
What guardrails should be put around AI agents when they generate large public procurement documents (RFPs), and should a human oversight layer remain?
High‑stakes procurement requires accountability and safeguards to prevent costly errors, making it essential to define appropriate guardrails and determine the necessity of human‑in‑the‑loop review.
Speaker: Saibal Chakraborty
Should AI agents in the public sector be fully autonomous or always include a human‑in‑the‑loop for critical decisions?
Clarifying the degree of autonomy influences governance design, risk management, and public trust in AI‑driven government processes.
Speaker: Saibal Chakraborty
How can governments engineer trust into AI agent systems so that ministers and secretaries feel confident using them?
Establishing trust mechanisms (auditability, transparency, control) is prerequisite for adoption of AI agents in high‑impact governmental roles.
Speaker: Victoria Espinel
What are the risks of over‑reliance on AI agents in government services, and how can they be mitigated?
Identifying over‑reliance hazards (e.g., blind trust, lack of human oversight) helps shape safeguards and balanced deployment strategies.
Speaker: Victoria Espinel
How can governments balance strategic and technical AI sovereignty, achieving data control now while pursuing full supply‑chain sovereignty later?
Strategic sovereignty (data governance) can deliver immediate benefits, while technical sovereignty (hardware/control of supply chain) requires longer‑term investment; balancing both is crucial for national security and autonomy.
Speaker: Srinivas Tallapragada
What upskilling programs are needed for public‑sector staff to effectively work with AI agents and understand their limitations?
Public officials often lack AI expertise; targeted training ensures they can interpret outputs, apply guardrails, and maintain oversight.
Speaker: Saibal Chakraborty
What standards and evaluation mechanisms should be developed for AI agents, and how can they be localized for different cultural and regulatory contexts?
Common standards enable consistent safety assessments, while localization respects regional legal, cultural, and ethical differences.
Speaker: Lee Tiedrich
How can regulatory frameworks be made agile to keep pace with the rapid evolution of AI technologies?
Agile regulation allows policies to be updated as AI capabilities change, preventing regulatory lag and fostering innovation.
Speaker: Srinivas Tallapragada
How should an evaluation ecosystem for AI safety and security be built, especially involving AI Centers in the Global South?
A coordinated evaluation infrastructure, including regional AI safety institutes, is needed to test, certify, and share best practices globally.
Speaker: Lee Tiedrich
What digital infrastructure (e.g., BIM, standardized data models) is required to enable AI agents to assist in designing and managing physical infrastructure?
Accurate, standardized digital representations of assets are prerequisite for AI agents to generate reliable designs and operational recommendations.
Speaker: Mike Haley
How can the impact of AI on inclusive outcomes be measured, such as providing vernacular language tools for farmers across India?
Defining metrics for accessibility and effectiveness of AI tools in local languages is essential to assess inclusive benefits.
Speaker: Saibal Chakraborty
What methodologies can be used to assess whether AI deployments are raising income for the bottom 50 % income percentile?
Quantifying socio‑economic impact on low‑income populations provides a concrete measure of AI’s societal value.
Speaker: Srinivas Tallapragada
What governance models are effective for sovereign AI nerve centers and state‑run data exchange platforms?
Understanding organizational, legal, and operational frameworks for state‑owned AI hubs informs replication and scaling.
Speaker: Minister Sridhar Babu
How can AI agents be integrated with real‑time climate data pipelines to predict floods, droughts, and other events for proactive governance?
Linking AI agents to live environmental data can enable anticipatory actions, reducing disaster impact.
Speaker: Minister Sridhar Babu
What ethical and liability frameworks are needed for multi‑agent ecosystems where agents invoke third‑party agents?
When agents call external services, clear rules for responsibility, testing, and risk allocation are required.
Speaker: Lee Tiedrich
What technical guardrails (e.g., hallucination detection, bias mitigation) are required for government‑deployed AI agents?
Ensuring outputs are accurate, unbiased, and free from hallucinations is critical for trustworthy public‑sector AI applications.
Speaker: Srinivas Tallapragada, Mike Haley

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Advancing Scientific AI with Safety Ethics and Responsibility


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel explored how the rapid emergence of AI-enabled biodesign tools is shifting biosecurity risk from traditional laboratory containment to the upstream design phase, creating a new governance challenge that demands attention to data governance, model evaluation and red-team activities [6-13]. Participants argued that India’s heterogeneous scientific ecosystem cannot rely on a single central authority; instead, oversight must be decentralized to empower biosafety officers, information-security units and other institutional actors, establishing multiple, coordinated checks and balances [24-27][28-31].


To reconcile open-science benefits with high-risk capabilities, a tiered-access model combined with contextual norms and pre-deployment assessments using structured rubrics was recommended, drawing on RAND Europe’s risk index and a “know-your-customer” style credentialing [41-49][50-57]. The speakers emphasized that assessment results should be shared through a credentialed network with tiered confidentiality rather than kept proprietary, and that a six-monthly independent monitoring ritual, potentially housed in an AI-safety institute linked to governments and international bodies, would provide continuous risk oversight [92-99][100-119]. Recognizing limited AI readiness in many Global South institutions, they called for socio-cultural evaluation, deployment of small-model solutions, self-regulation commitments and capacity-building programmes to make safeguards proportionate and functional [62-71][75-79]. A unified yet adaptable framework was proposed, integrating participatory stakeholder involvement, accountability mechanisms for developers to document testing, and self-regulation endorsements [72-77][78-80].


Cross-border challenges were highlighted, with fragmented data standards and divergent legal regimes hampering biosurveillance; the panel urged harmonised federated standards (e.g., HL7-FHIR-style), pre-negotiated legal safe-harbours, and shared evaluation criteria embedded in national systems [226-233][234-241]. To close incident-reporting gaps, a new AI incident taxonomy covering physical, psychological, cyber, algorithmic, socio-economic and environmental harms was described, alongside toolkits for assessing user perceptions and building AI literacy in healthcare settings [270-276]. Emerging powers such as India are creating sandboxes, a Global-South trustworthy-AI network and an AI-safety commons to enable low-resource countries to adopt tailored governance while learning from each other’s experiences [161-169][170-180].
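The harm taxonomy described above can be sketched as a simple classification structure. This is an illustrative sketch only; the panel’s actual taxonomy may define and subdivide categories differently:

```python
from enum import Enum

class HarmCategory(Enum):
    """Harm classes named in the proposed AI incident taxonomy."""
    PHYSICAL = "physical"
    PSYCHOLOGICAL = "psychological"
    CYBER = "cyber"
    ALGORITHMIC = "algorithmic"
    SOCIO_ECONOMIC = "socio-economic"
    ENVIRONMENTAL = "environmental"

def tag_incident(description: str, categories: set) -> dict:
    """Attach taxonomy tags to an incident report (illustrative helper)."""
    if not categories or not categories <= set(HarmCategory):
        raise ValueError("each incident needs at least one valid category")
    return {"description": description,
            "harm_categories": sorted(c.value for c in categories)}

# A single incident can fall under several harm classes at once.
report = tag_incident(
    "Triage chatbot gave misleading dosage advice",
    {HarmCategory.PHYSICAL, HarmCategory.ALGORITHMIC},
)
```

Allowing multiple tags per incident matters in practice, since one failure (for instance, a biased triage model) can simultaneously cause algorithmic and physical harms that different responders need to track.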


The discussion concluded that effective governance must move beyond model-centric audits to systemic, socio-technical assessments that consider capability uplift, incentive structures and the cross-border diffusion of risk, integrate AI evaluation into grant reviews and biosafety panels, and incorporate tech-sovereignty measures for AI security [189-197][198-202][147-149][155-156]. Overall, a decentralized, collaborative, and context-aware architecture, supported by regular independent evaluation, capacity building and interoperable standards, is essential to safely harness AI-driven scientific innovation [24-27][41-57][122-130][226-233][189-202].


Keypoints

Major discussion points


AI is moving bio-risk upstream from physical labs to the design stage, creating a new governance challenge.


The panel highlighted that traditional bio-security relied on “physical infrastructure and lab facilities” [7-8] but AI-driven biodesign tools now let researchers “engineer proteins, optimise DNA sequences…” without those constraints [10-12]. This shift means risk must be managed earlier in the design pipeline [12-13] and calls for “more adaptive oversight mechanisms” [23-24].


Oversight must be decentralized, capacity-building focused, and tailored to heterogeneous ecosystems (especially in India and the Global South).


Speakers argued that a single authority “in Delhi…won’t work” [25-27] and advocated for empowering “information security or biosecurity offices” [28-31] and creating “cross-trained AI biosafety review panels” [147-149]. They also stressed the wide variation in “governance capacity, compliance culture and technical expertise” across institutions [126-129] and the need for “proportionate, capability-aware safeguards” [138-144].


Open-science benefits must be preserved through tiered, contextual access and pre-deployment assessments rather than blanket restrictions.


The discussion proposed “tiered access and contextual norms” [41-42] and praised the RAND Europe “pre-deployment assessment with structured rubrics” [44-48]. It was emphasized that “differentiated governance at capability level is always better than blanket restriction at access level” [57-58] and that open-source tools remain essential, especially for low-resource settings [53-56].


Institutionalising independent evaluation (red-teamings) and continuous monitoring is essential, but requires new structures and investment.


A six-monthly “monitoring and assessment of risk” ritual was recommended [105-106] and the creation of an “AI safety or security institute” with formal government links was suggested [113-118]. The need for “non-interactive methodology” and broader integration into institutions was also noted [107-110].


Cross-border data-standard harmonisation and legal safe-harbors are critical for AI-enabled biosurveillance and pandemic preparedness.


Participants pointed out fragmented standards across Southeast Asia [212-216] and advocated for federated frameworks like HL7-FHIR adapted for public-health [227-230]. They called for pre-negotiated “legal safe harbors” for data sharing [230-234] and shared evaluation criteria embedded in national systems [235-241].
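A federated, HL7-FHIR-style exchange means each jurisdiction reports surveillance data in a shared resource shape rather than a bespoke format. A loose sketch follows; the structure is modeled on FHIR’s Observation resource but is heavily simplified and uses a placeholder code system, so it is not a conformant payload:

```python
import json

def make_surveillance_observation(region: str, pathogen_code: str,
                                  case_count: int) -> dict:
    """Build a simplified FHIR-Observation-like record (illustrative only)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {  # what was measured; the code system here is a placeholder
            "coding": [{"system": "example:pathogen-codes",
                        "code": pathogen_code}],
        },
        "subject": {"display": region},  # reporting region, not a patient
        "valueInteger": case_count,
    }

record = make_surveillance_observation("Region-A", "H5N1", 12)
print(json.dumps(record, indent=2))
```

Because every participant emits the same resource shape, a federated aggregator can merge reports across borders without per-country parsers, which is the interoperability benefit the panel attributed to harmonised standards.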


Overall purpose / goal of the discussion


The panel convened to explore how emerging AI-driven biodesign and biosurveillance tools reshape bio-security risk, and to identify governance, policy, and capacity-building measures, especially for the Global South, that can ensure safety while retaining the scientific and societal benefits of open AI research.


Tone of the discussion


Opening (0:00-5:00): Cautious and exploratory; speakers acknowledge uncertainty (“not an AI safety expert…take it with a pinch of salt” [3-4]) and the novelty of the risk landscape.


Middle (5:00-22:00): Constructive and solution-oriented; ideas about decentralized oversight, tiered access, and institutional mechanisms are presented with optimism.


Later (22:00-38:00): Collaborative and forward-looking; emphasis on building networks, commons, and cross-border standards, with a tone of partnership and urgency.


Closing (38:00-end): Summative and hopeful; participants reiterate key actions, express confidence in emerging frameworks, and thank each other, ending on a cooperative note.


Overall, the conversation moves from identifying a novel problem to proposing concrete, multi-level solutions, maintaining a collegial and proactive tone throughout.


Speakers

Speaker 1


Area of expertise: Biosecurity, AI-enabled biodesign, AI safety in life-sciences.


Role / Title: (not specified in the transcript) – presented as a biosecurity expert discussing institutional readiness and safety measures.


Citation: [S13]


Speaker 2


Area of expertise: AI governance, open-science policy, risk assessment for AI-enabled biological tools.


Role / Title: (not specified in the transcript) – referenced as a contributor to RAND Europe studies and a proponent of pre-deployment assessments.


Citation: [S10][S12]


Speaker 3


Area of expertise: AI policy, socio-technical assessment, AI readiness for emerging economies, governance frameworks.


Role / Title: (not specified in the transcript) – identified as “Geetha”, who works on institutional gaps and AI-trustworthiness initiatives.


Citation: [S1][S2]


Moderator


Name: Shyam


Area of expertise: Session facilitation / AI impact discussions.


Role / Title: Moderator of the panel.


Citation: [S16]


Audience Member 1


Area of expertise: Psychological harms of AI, AI safety research.


Role / Title: Researcher in AI safety at the University of York.


Audience Member 2


Area of expertise: Model monitoring, data-drift and temporal robustness.


Role / Title: Audience participant (no further affiliation provided).


Audience Member 3


Area of expertise: Biosecurity incident response, cross-border prevention frameworks.


Role / Title: Audience participant (no further affiliation provided).


Additional speakers:


– None beyond those listed above.


Full session reportComprehensive analysis and detailed insights

The moderator opened the session by asking whether the emerging challenges should be framed as a data-governance issue, a model-design problem, or a compliance-verification matter [1].


Bio-security perspective (Speaker 1).


Speaker 1, whose expertise lies in bio-security rather than AI safety, framed his remarks in terms of life-science risk governance [2-4]. He noted that traditional bio-security has relied on physical infrastructure, inspections and material-transfer controls [7-8], but the rapid proliferation of AI-enabled biodesign tools, over 1,500 according to a RAND study, has begun to decouple risk from those physical safeguards [9-10][9-13]. These AI-driven capabilities now allow researchers to engineer proteins, optimise DNA sequences and model pathogen-host interactions without laboratory containment [10-12]. Consequently, the risk landscape is shifting upstream to the design phase of biological work [12-13], demanding new, more adaptive oversight mechanisms [23-24]. While data governance, model evaluation and red-team activities remain essential [13-15], the panel argued they must be re-oriented to address this upstream threat.


Open-science discussion (Speaker 2).


Speaker 2 advocated a tiered-access and contextual-norms approach [41-42], supported by pre-deployment assessments using structured rubrics such as RAND Europe's risk index [44-48]. He emphasized that open-source tools are crucial for low-resource settings and should not be conflated with danger [53-56]; instead, differentiated governance at the capability level should replace blanket restrictions [57-58]. Building on this, he proposed a systematic pre-deployment assessment regime akin to a "know-your-customer" (KYC) approach, in which developers of high-risk biodesign tools undergo credentialed scrutiny before release [49-52]. The results of these assessments would be shared across a credentialed network with tiered confidentiality [115-119]; since a dangerous capability is "really hard to withdraw" once released, the window for prevention lies before deployment [45-48].
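The tiered-access idea can be sketched as a simple decision rule. This is illustrative only: the tier names, credential classes, and mappings below are assumptions for the sketch, not part of RAND Europe's rubric or any published framework.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = 1       # e.g. tools trained only on benign organism data (assumed tiering)
    MODERATE = 2
    HIGH = 3      # e.g. tools trained on virus sequence data

class Credential(Enum):
    PUBLIC = 1
    VERIFIED_RESEARCHER = 2
    ACCREDITED_BIOSECURITY = 3  # vetted, KYC-style credential (hypothetical class)

def access_level(tool_tier: RiskTier, requester: Credential) -> str:
    """Map (tool risk tier, requester credential) to an access decision.

    Encodes the panel's principle: governance differentiates by capability
    level rather than imposing blanket access restrictions.
    """
    if tool_tier is RiskTier.LOW:
        return "open"  # open source stays open; no conflation with danger
    if tool_tier is RiskTier.MODERATE:
        # gated behind lightweight verification, still broadly accessible
        if requester.value >= Credential.VERIFIED_RESEARCHER.value:
            return "open"
        return "gated"
    # HIGH tier: pre-deployment assessment plus credentialed private access
    # for defensive research only
    if requester is Credential.ACCREDITED_BIOSECURITY:
        return "private"
    return "denied"

print(access_level(RiskTier.HIGH, Credential.ACCREDITED_BIOSECURITY))  # private
```

The point of the sketch is that restriction attaches to the capability tier of the tool, not to open-source status as such.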


Institutional-gap analysis (Speaker 3).


Speaker 3 highlighted that India's high global ranking masks substantial intra-regional disparities, with countries such as Indonesia lagging far behind [63-64]. She pointed out that large language models trained predominantly on Western data fail 20-30% of biological-safety benchmarks relevant to Southeast Asia [66-67][68-70], underscoring the need for socio-cultural evaluations and participatory approaches that involve end-users from the outset [71-73]. She also called for the development of small, edge-deployed language models for low-resource settings [71-73] and stressed the importance of building AI literacy and ensuring privacy protections for marginalized communities [270-274]. Finally, she reiterated India's self-regulation commitments and argued that a unified yet adaptable framework can be tailored to diverse deployment settings [78-80].
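The benchmark failure figure is, in essence, the share of locale-specific test items on which a model's response is judged unsafe. A minimal sketch of that computation follows; the toy refusal-based judge is an assumption standing in for the human rubric or trained classifier a real safety benchmark would use.

```python
def safety_failure_rate(responses, is_unsafe):
    """Share of benchmark items on which the model's response is judged unsafe.

    `responses` is a list of model outputs for a locale-specific benchmark;
    `is_unsafe` is the judging function (rubric, classifier, or human label).
    """
    if not responses:
        return 0.0
    return sum(1 for r in responses if is_unsafe(r)) / len(responses)

# Toy judge (assumption): flag any response that complies instead of refusing.
def toy_judge(resp):
    return not resp.lower().startswith(("i can't", "i cannot", "i won't"))

batch = [
    "I cannot help with that.",
    "Sure, here is the protocol...",   # judged unsafe by the toy rule
    "I can't assist.",
]
print(safety_failure_rate(batch, toy_judge))  # 0.3333333333333333
```

A 20-30% figure, in these terms, means roughly one in four regionally relevant hazardous prompts slipped past the model's safeguards.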


Independent-evaluation / red-team proposal (Speaker 2).


Speaker 2 recommended institutionalising a six-monthly "monitoring and assessment of risk" ritual carried out by an AI-safety institute that is technically credentialed, independent, and formally linked to governments [105-108][111-118]. He cited a recent SecureBio study in which a frontier language model outperformed expert virologists on wet-lab protocol troubleshooting [101-104], underscoring the urgency of continuous, non-interactive risk monitoring [107-110].


Ecosystem-specific safety measures (Speaker 1).


Speaker 1 suggested embedding AI evaluation modules into grant-review procedures and establishing cross-trained AI biosafety review panels at the institutional level [147-149]. He called for investment in domestic evaluation capacity, such as the AI safety institute at IIT Madras [148-149], and for leveraging tech-sovereignty measures to control data flows [155-156].


Emerging Global-South powers (Speaker 3).


Speaker 3 described India's creation of sandboxes for health-care and ideological AI systems [162-163] and announced the launch of a Global-South network for trustworthy AI together with an AI-safety commons that will provide shared evaluation resources within the next one to two years [164-166]. She also noted the development of an incident-reporting framework customised for Indian contexts, capturing harms across physical, psychological, cyber-incident, algorithmic, socio-economic and environmental dimensions [270-274].


Model-vs-socio-technical focus (Speaker 1).


Speaker 1 warned that even with perfect digital safeguards, physical infrastructure is still required to synthesise or modify viruses, highlighting the "digital-to-physical barrier" that limits the immediate creation of dangerous pathogens [246-251]. He argued that AI can also aid safety, for example by using agentic AI to detect jailbreak attempts in vaccine-development platforms, but that governance must balance model-centric controls with broader socio-technical considerations.


Biosurveillance integration (Speaker 2).


Speaker 2 observed that fragmented data standards and divergent legal regimes in Southeast Asia have led to data hoarding that cost lives during COVID-19 [212-219]. He proposed adopting a federated, HL7 FHIR-style interoperability framework for public-health surveillance [227-230], establishing pre-negotiated legal safe-harbours for emergency data sharing [231-234], and embedding shared evaluation criteria within national surveillance systems [235-241]. He warned that the AI-governance community often treats biosurveillance as a niche, while biosecurity experts see AI merely as a tool, creating a dangerous communication gap [237-240].
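As a rough illustration of what such an interoperability layer standardizes, a syndromic-surveillance report shaped like an HL7 FHIR Observation resource might look as follows. The specific code, date, and values are placeholders for the sketch, not an official surveillance profile.

```python
import json

# Hypothetical syndromic-surveillance record shaped like an HL7 FHIR
# Observation resource. Two things matter for cross-border sharing:
# a shared terminology system (here LOINC) and a shared resource shape,
# so that every jurisdiction's pipeline can parse every other's reports.
report = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",  # shared terminology, not a bespoke national code list
            "code": "75325-1",             # placeholder code used here for "Symptom"
            "display": "Symptom",
        }]
    },
    "effectiveDateTime": "2025-01-15",
    "valueCodeableConcept": {"text": "influenza-like illness"},
}

print(json.dumps(report, indent=2))
```

The federation point is that each country keeps its own data store and legal regime; what is harmonized is the wire format and vocabulary, so emergency sharing does not require ad hoc translation during an outbreak.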


Audience Q&A.


An audience member from the University of York raised the issue of psychological impacts, prompting Speaker 3 to present a taxonomy of harms (spanning physical, psychological, cyber-incident, algorithmic, socio-economic and environmental dimensions) and to share a toolkit for assessing healthcare workers' perceptions of AI tools [265-276]. The discussion also covered temporal data drift, with Speaker 3 explaining that model-monitoring pipelines must detect distributional shifts over time, a key safety criterion [286-288].
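One common way such monitoring pipelines flag distributional shift is the Population Stability Index (PSI), which compares a live feature sample against the training-time reference. A minimal sketch, assuming a single numeric feature and the conventional (not universal) 0.2 alarm threshold:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training-time) sample
    and a live sample of the same feature. Values above ~0.2 are commonly
    read as significant drift; the threshold is a convention, not a rule."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(xs)
        # small epsilon keeps log() finite for empty bins
        return [max(c / n, 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]        # reference distribution
live = [0.1 * i + 4.0 for i in range(100)]   # shifted live data: drift expected
print(psi(train, live) > 0.2)  # True
```

In a production pipeline this check would run on a schedule per feature and per model output, with alarms routed to the team that can retrain or roll back the model.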


Coordinated incident-response framework.


Speaker 1 advocated empowering biosafety officers at the lab level and providing them with clear reporting channels to central leadership, creating a “decentralised but integrated” system [295-299]. Speaker 2 illustrated Singapore’s multi-agency model (NEA, MOH, Communicable Disease Agency, etc.) as an exemplar of clear role allocation during crises [300-309]. Both agreed that prevention and preparedness, underpinned by robust governance, are essential.


Closing remarks.


The moderator summarised the key points: the upstream shift of bio-risk, the necessity of decentralised yet coordinated oversight, the preservation of open science through tiered access, the importance of capacity-building in the Global South, the need for harmonised data standards and legal safe-harbours, and the value of a systematic, socio-technical approach to AI safety [255-263]. Speaker 1 added that AI can itself aid safety (for example, agentic AI detecting jailbreak attempts) while reiterating the digital-to-physical barrier [246-251]. The panel concluded on a hopeful note, emphasizing collaborative networks, shared safety commons, and adaptive governance as the path forward [252-254].


Session transcript: complete transcript of the session
Moderator

Key area: should we think about it as a data-governance problem, a problem in model design, or should it be more on a verification or compliance angle?

Speaker 1

Thanks, thank you very much Shyam for having me, and good morning to everyone and welcome to this session. So I think, okay, let me maybe just start with saying that I'm not an AI or AI safety expert, so whatever I say, take it with a pinch of salt. My work is in biosecurity and that's the angle I'll come from. I think all of those things, whether it's model evaluation and other things, those are there and those are very, very important factors, and those are the things that we need to keep in mind. But on top of that there is also a very important, deep structural change that is happening. For example, in the field of life sciences, historically whatever risk and risk-governance things that we had were very much linked to the physical infrastructure and lab facilities and facility inspection and material-transfer control and things like that.

But that seems to have changed, and seems to be changing very rapidly now, with the kind of AI biodesign tools as well as LLMs that are emerging. So I think RAND also did a study on this, but there are more than probably 1,500 biodesign tools that are out there, and those are totally transforming how life sciences, but in general, science is done. Now, what kind of change are we seeing? With these capabilities, now it's much easier to engineer proteins, optimize DNA sequences to do things that we want, have better pathogen-host interaction modeling, and things like that. Now, these capabilities are, because of AI, becoming partly decoupled from the physical containment measures which were usually used in the life sciences.

So we have a lot of this risk landscape shifting a little bit more upstream to the design side, at least when it comes to the biological side of things. So yes, data governance matters. Model evaluation and red-teaming are essential and we should be doing that. But it is also very important, especially for a country like India where we have a very vibrant scientific ecosystem that is also very uneven, to see how we can bring this rapidly evolving AI-enabled science into the existing mechanisms to some extent, but also at the same time develop those capabilities and have more people with the core capabilities in areas like chemical security, AI, nuclear security, and things like that.

So we need to train more people on those things. So integrating, again, going back to the life sciences, so integrating AI evaluation into biosafety systems, strengthening the institutional readiness. In some places, some labs and some institutions have information-security labs or information-security offices. How can we get them better prepared for these new emerging risks that are coming due to AI? Some places have biosafety officers or biosecurity officers. How can we enable them better to address the AI risk? That is the direction that we need to move towards. And have a more adaptive oversight mechanism that is not limited to this once-in-a-while inspection that happens, but that keeps pace with the rapidly evolving things that we are seeing with the AI models coming up.

And I think, so, just in terms of the paradigm change that we are seeing, and that you mentioned, there need to be more decentralized checks and balances and oversight mechanisms. If there is one authority sitting somewhere in Delhi and trying to do everything, that's not going to work. So that is one of the things that we have to collectively think about. How do we decentralize these kinds of oversight systems to some extent? For example, as I was saying, how can we empower the information-security or biosecurity offices and create what, in the field of disarmament where I have worked, is called a web of prevention. One measure is not enough. It's not sufficient.

You need to have a number of measures in place which collectively can help prevent something bad from happening. Thank you.

Moderator

Thank you. That’s very insightful. And I think we’ve already touched on some areas that, you know, that would be follow -up questions. P .T., focusing a bit more on… open science where high risk domains, especially in biological data and AI capabilities, as Surya was mentioning. How do we preserve the benefits of open science while preventing the destabilizing diffusion of capabilities that we were just discussing about?

Speaker 2

Thank you. Thank you for having me today. So I guess like I would love to be able to give like a binary yes or no answer, right? I think we all want to have that. But unfortunately, that's not quite the case. So we need to find a way to balance the openness and also the restrictions as well. So I guess my answer here would be sort of like tiered access and contextual norms. I think those are really important. And I think RAND Europe has done a really great job at establishing the global risk index on AI-enabled biological tools, and also just generally looking into AI safety in general, where they do this thing where they call the pre-deployment assessment

with structured rubrics. And I'm a huge fan of that, because I think that when you release very frontier models and frontier tools, the danger is already out there once released. It's really hard to withdraw the danger. But however, prevention, right? There's this window before you release where you can do a pre-deployment assessment. So I think I'm a really huge fan of that, and also the same way that I'm a big fan of KYC, know your customer. And I guess this principle also pretty much applies in the case of biosecurity, where we differentially allow the development of medical countermeasures and also the defensive measures that are necessary for the research, but also don't limit the researchers from actually innovating either.

And I guess my point here is that we're not going to be able to do that. Non-safeguarded, like private, access for credentialed researchers, where necessary for defensive research, is absolutely necessary. And then, you know, open-source tools are necessary. We can't turn away from being open source. Any governance structure that conflates open source with danger makes a huge mistake, because that is also a very critical development point, especially for lower-resource settings. So we cannot afford to conflate that altogether. So, a very long way to answer this, and to summarize my answer: differentiated governance at the capability level is always better than blanket restriction at the access level.

Yeah.

Moderator

I think that’s a very structured answer and I think, you know, there’s a start of a very valid framework level conversation that’s already happening there. Geetha, turning to you, thinking more about institutional gaps in enabling some of the solutions that we are discussing, potential solutions, what are the most immediate gaps that you see in evaluating systems, technical capability, regulatory and coordination, largely from the policy angle that you work in?

Speaker 3

Thank you, Shyam. Good morning, everyone. So on the technical capabilities, right, the most fundamental thing I see is the AI-readiness aspect of deployment. So in general, when we see, India stands or ranks third globally, and when we see the Southeast Asian countries, I think Indonesia is around 49, and so there we see the gap, right? So whatever we do from the Western context or in the Indian context can never cater to the AI-readiness aspect of deployment. So I think it's important to cater to the needs, the unique needs, of the Southeast Asian countries. And moreover there is the end-user perception, where we see that we have to build a lot of capacity for creating awareness among the end users who are actually going to use the products. And from the policy perspective I would like to give you certain aspects where we think about the socio-cultural aspects that are relevant to the deployment environments.

So in general the large language models are usually trained on Western data, and there is some very recent research work (maybe I will cover a bit of both tech and policy here). So there is a Southeast Asia-related safety benchmark which says that all these leading large language models failed on more than 20 to 30 percent of the risks when evaluated in biological settings. Which means that we did not have enough safeguards to protect people from encountering all these risks. And moreover, this lets us know that we have to build in more socio-cultural evaluations and assessments which will cater to the harms that are particular to that specific deployment environment, rather than just having high-level evaluation strategies.

And this cannot come just from the policy side, right? So we need to bring in a participatory approach which will bring in the end users, the different stakeholders involved in using all these AI systems, right from the requirements definition, right? So when we assess whether we need an AI system or not, generally now there is a perception that for whatever we are going to build, or the problem that we are going to solve, by default we assume that we need a large language model, which is not even possible to deploy in a low-resource setting, right? So we need to think about small language models which will enable edge deployments in the low-resource settings, and also consider all the multicultural and socio-economic diversity that exists in these regions, so that your model doesn't hallucinate and is still fair, and also establish some governance and accountability frameworks which will make the developers more accountable, because having the developers more accountable will lead them to consider more safeguards, right?

And also create more awareness. The main fundamental thing is that they will be expected to document whatever testing has been gone through. And on the policy side, there is one more aspect which the Indian government also endorses, right? Self-regulation: voluntary commitments on managing and mitigating the risk that comes out of all these AI models. So I think we have to have a unified framework which can still be adaptable to different deployment settings.

Moderator

I think we are already getting a diversity of perspectives here and it is very useful to hear. Moving ahead and thinking about institutionalizing these kind of capabilities in scientific AI context, PT turning to you. Should independent evaluation and red teaming of AI systems from a technical kind of solution perspective for this problem that generate biological outputs, especially thinking biosecurity, given your perspective on this, should it become a norm and part of the global scientific specialist infrastructure? And if so, how would we go about that?

Speaker 2

I think we have to have a clear understanding of the role of the AI system, and I think that is a key point. So I guess a good example to use here is probably nuclear weapons, right? Which fall under this organization called the International Atomic Energy Agency, the IAEA. Now, from my perspective, I think fissile materials, correct me if I'm wrong, are very scarce.

And they are, to a certain degree, technically trackable. And they are also, more than anything else, highly regulated. Whereas biology, on the other hand, is everything but that. It's diffuse, it's dual-use by nature, and it's also nearly impossible to trace. And also, most importantly, commercially available, right? And so there was a recent study, actually, done by this organization called SecureBio, where they tested frontier large language models against expert virologists. And it turns out that ChatGPT o3 actually outperformed expert virologists by 94% at troubleshooting wet-lab protocols. So that's a very shocking number, right? And then, I mean, obviously you mentioned earlier that there's a very concentrated effort that is happening between the US, UK, and China, like the global superpowers, basically.

And I guess there’s, we, in the recommendation from the RAND Europe that I was, you know, helping out with is that we recommended that governments and also independent researchers do this six -monthly ritual of monitoring and also assessment of risk on a continuous basis. And we also suggested, obviously, like using AI as an automation tool to increase the efficiency of this risk monitoring system. But I think, to your point, I think stuff like this, stuff like that is non -interactive methodology that doesn’t require, you know, researchers to actually query directly with the danger systems is actually already in and of itself a very meaningful, you know, safeguard. But that is not enough. You know, we need something that is much larger than that.

That is the integration into, like, you know, institutionalizing it. And I would argue that, for a six-monthly ritual, that refresh cadence, to be delivered, it's going to require a very significant investment from governments at the multilateral level, right? And so we can't go without any investment at all. So my suggestion would be to actually implement this AI safety or security institute model that we've been applying, where largely it is technically credentialed, it's independent, but also has a very formal relationship with the government. And something that I would caveat from the bio side is that the institution should have some kind of anchoring around the Biological Weapons Convention or the WHO.

Because right now that relationship is not quite there yet. And I think, you know, back to my point about pre-deployment assessment, I think that is definitely needed, and then the results have to be shared across the credentialed network with tiered confidentiality, rather than being kept, you know, as proprietary to the different states. I think it's kind of a

Moderator

That’s an interesting position, PT. Suryesh, thinking more about safety measures at large, how can we make sure that they remain rigorous and feasible within research ecosystems that you’re quite familiar with, you know, from a biosecurity angle, if you will, but largely also in the larger scientific ecosystem.

Speaker 1

Thanks Shyam. I think first, yeah first thing that we need to understand is how that ecosystem is and then see if certain measures will work there or not, right. One of the hallmarks of let’s say Indian scientific ecosystem is there is a lot of heterogeneity. There are some places which are really extremely well performing and there are other places who are not well resourced or have other all kind of challenges. So, understanding how the ecosystem is, what kind of regulation within the institutes that are there, what kind of administrative measures that are there, what kind of safety teams these kind of institutes might have, all of those things are extremely important, right. The governance capacity, compliance culture and technical expertise varies widely in Indian institutions.

And I believe this is true for many other countries in the Global South as well, so it's not something very unique. Particularly in India, we have challenges related to different kinds of resources, and even when the resources are there, sometimes it's also problematic to use them efficiently enough. Now, given that context, if we just import safety frameworks that are developed in a well-resourced place in a Western country or any developed country, I don't know if those would be a very good fit for the kind of system that we have here. So those might become more performative than functional to some extent. Another challenge, which P.T. also mentioned to some extent, is that the speed and scale of AI is huge, right?

And these traditional review mechanisms that institutes have for safety audits and all of those things are not going to work. We need something which is far more adaptive and quick. And also, what we had traditionally were these periodic, paper-based, facility-centric kinds of measures, and those are very much outdated in the era of AI that we live in. Now the question becomes, how do we design proportionate, capability-aware safeguards that would be better matched to the challenges that we have? One of the major challenges, as I think a lot of us realize, is that there is limited awareness about AI safety when it comes to scientific issues, even among the scientists.

So a large majority of scientists just don't know that what they are putting into, let's say, ChatGPT might be harmful, or that what they are getting out of biodesign tools could be harmful to some extent. So there is some understanding about the privacy-related issues, but safety and security is still a big gap in the understanding of even the scientific experts that are there. Now, also regarding AI, I think there needs to be a tiered risk classification. So not everything is highly risky. There are certain biodesign tools, for example, that are trained on virus data. Those we would put in a higher-risk category compared to something which is just working, let's say, on certain animals which are not dangerous.

Now, also the safety measures: as I was mentioning earlier, as the risk has moved a bit upstream, onto the design side, we should also have more safety measures moving upstream. And as P.T. was mentioning, certain kinds of evaluation before launching AI tools are necessary, but also integrating AI evaluation modules into grant-review processes, creating cross-trained AI biosafety review panels, so panels specifically for AI biosafety, from the bottom-up side instead of a top-down approach. Investing more in domestic evaluation capacity, having more AI safety institutes like Geeta's home institute at IIT Madras. So we need a lot more of that. And lastly, I think what we have in the US and UK is that a lot of AI safety work is being done there, right?

And as I was mentioning, importing that directly might not work. And we in the Global South are largely the users and importers of this technology. So we have to see, from the bottom-up side, where do we put those safety measures? When it comes to import, when the data is being transferred, are there certain places where we can put those kinds of safeguards? Also, how can we use some tech-sovereignty measures in this context, right? Tech-sovereignty measures are used for a number of things, but AI safety and security is something where those could also be used to some extent. So, yeah, I would stop here and then we can discuss.

Thank you.

Moderator

Thank you. And I think a lot of useful thoughts here for us to explore a bit more. I think we've just crossed the mid-mark, and I'm going to use Geeta to kind of bridge between the next two topics by combining two of your questions, sorry for that. So, just as Surya mentioned: will the emerging scientific powers, you know, Global South middle powers, be able to shape governance in this context, especially to enable science, or will they continue to inherit the frameworks? And if they were to show leadership, what would that look like in scientific AI and research ecosystems? And, you know, you've already been working on some of this, so I'm looking forward to hearing concrete measures that are happening.

Speaker 3

Sure. So in general, what I think is, definitely the emerging powers are putting in all efforts to bring in the tools and frameworks that are required for governing these AI systems. For example, India's strategy towards all these emerging techs is to create sandboxes, which are highly essential for deploying or evaluating safety aspects of the models, right? So they do it for healthcare systems, they do it for ideology systems, and so on. So these types of tools and frameworks, coming from Indian settings, will actually help the other underdeveloped countries to learn from the strategies that we use and then build something of their own; and something which cannot go cross-border can still happen through learning and collaboration, right?

So for example, we are going to launch a Global South network for trustworthy AI, which will enable all these mechanisms to happen, enable people to develop and deploy AI systems for low-resource settings. And the other initiative, which is going to give a very big leap in evaluating AI safety, is coming up with an AI safety commons for the Global South. That is part of the safe and trusted AI pillar, one of the pillars in this impact summit, and I think in another one or two years we will have a safety commons which will help us evaluate and assess how these AI data, models and systems work for different deployment settings.

Another important thing is, as Suresh mentioned, the audit frameworks. When we focus on the kind of risk and audit mechanisms that we have here, we still have them from an organization perspective and not from the end-user perspective. So at CRI, we have come up with an incident-reporting mechanism and a framework that caters to the Indian settings. It tells you how to operationalize AI incident reporting in the Indian settings, which is completely different from the Western settings. And here we have to capture the harms that people experience in the marginalized communities, which would otherwise never be recorded anywhere, right? So how do we enable all these things?

So since it is all about all these CERN-based systems, right, even those things will have certain impacts on the marginalized communities, which may be an indirect impact. But how do they know that such things are happening to them, right? So those kinds of gaps we should mitigate by building more awareness, creating more AI literacy. And we should also be able to provide more privacy to all these people. The final thought, combining all these things, is that we have to bring in some kind of collaborative work between the different stakeholders who are involved in developing and deploying these systems. And the governments have already provided certain prompt knowledge about how to enable all these things through the techno-legal framework and guidelines that were recently published, and the AI governance guidelines.

Which were recently published by MeitY. So the Southeast Asian countries can learn from developing countries like India and then curate a more tailored approach towards their unique needs. So that is what I think. So whoever has an opportunity or a willingness can leverage these technologies, and learn from the mistakes as well as the experience that the other countries have, which is now openly available through all these summits.

Moderator

That’s very useful and I’m looking forward to following up on IIT Madras’s work in this front as well. Going to Suresh for kind of the last question in this series really, should, you know, safety measures, evaluations, primarily focus, where should the focus be at the model level? And you talked about upstream quite a bit. Should there be more broader socio -technical readiness measures, misuse considerations? Where do you think it should be?

Speaker 1

And also, very importantly, we have to see it in the context of, you know, people doing their own thing, the DIY kind of science that happens, and also small-scale commercial activities which are not fully under the oversight mechanisms of the government, right? So, considering all of these points, the policy evaluation must expand from model-centric assessment to socio-technical assessment. And this would include evaluating things like how much capability uplift there is relative to the government capacity: the government has a certain capacity to manage or do oversight, but how are these AI tools changing that? Incentive structures, very, very important, that shape model deployment. Also, the diffusion of risk across borders.

All of these things don’t respect national borders, right? So, how it’s going to spread. If people using VPN or other things, a number of other things that are there. So integration, lastly, the integration with existing biosafety and resource security systems as I had already mentioned. So briefly, like performance evaluation is necessary, but governance -relevant evaluation must be systemic. And otherwise, we risk auditing algorithms while ignoring the institutions that operationalize them. And that is very, very important, how we focus on that institutional level mechanisms. Thank you. Thank you.

Moderator

P.T., kind of the last structured question before we move into a bit more of an open conversation. AI becomes embedded not just in new capacities, but also in existing programs like biosurveillance and public-health systems, and so there's a mix between emerging scientific knowledge and more legacy, let's call it engineering, knowledge as well. So how do we make sure that safety, evaluation, interoperability, all of that exists across this divide without fragmentation happening across the ecosystem? Because, you know, you can easily imagine everyone doing their own AI safety evaluation and not necessarily talking to each other.

Speaker 2

Thank you, Shyam. I think this is a very important question, and it’s also a topic that I’m really passionate about, which is biosurveillance. To your point, countries are already deploying AI-enabled biosurveillance systems, whether that’s syndromic surveillance, genomic sequencing pipelines, or outbreak modeling. Countries are already doing that, but they are not building on unified data standards. They’re building on very incompatible data standards, with very different legal regimes across borders. We’ve seen that in Southeast Asia; even between countries like Singapore and Malaysia, you see different legal regimes for how they monitor data and run biosurveillance.

And so the fragmentation risk is not just a technical risk, I would argue, because we’ve seen COVID. I think we were all a little bit traumatized by COVID. We’ve seen how data hoarding and incompatible reporting actually cost lives. And I saw that especially happening across the region in lower-resource settings, in countries like Cambodia, for example. AI systems that are trained on non-representative data are obviously going to perform much worse. And guess what happens? When they perform worse, the region that is most affected is the region that needs the help the most, and that region is also the same region with the least data infrastructure.

And so, to answer your question about what I think we need to do, there are three things to be addressed here. The first one is obviously data standards harmonization. Currently, we don’t have that. I think we would need not a global overhead standard enforced on every country, but more of a federated set of interoperability frameworks that apply to different countries. I can think of HL7 FHIR, the Fast Healthcare Interoperability Resources standard, which attempts to address these very specific issues for clinical data, but here it would be adapted for public health surveillance. The second point is legal safe harbors for cross-border sharing of data in public health emergencies, negotiated beforehand, and this is important, beforehand, because if you negotiate during an outbreak, people are going to be freaking out.
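As an illustrative aside: the interoperability point can be made concrete with a minimal sketch of an HL7 FHIR R4 `Observation` resource built as a plain Python dict. This is not a production payload; the LOINC code and values below are illustrative assumptions, but the shape is what lets systems in different jurisdictions exchange surveillance data without bespoke translation.

```python
import json

# Minimal, illustrative HL7 FHIR R4 "Observation" resource sketch.
# The LOINC code and result value are assumptions for illustration only.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/observation-category",
            "code": "laboratory",
        }]
    }],
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "94500-6",  # illustrative: SARS-CoV-2 RNA presence code
            "display": "SARS-CoV-2 RNA presence",
        }]
    },
    "effectiveDateTime": "2026-02-01",
    "valueCodeableConcept": {"coding": [{"code": "detected"}]},
}

# Serialize and re-parse: any FHIR-aware system elsewhere can consume
# the same structure, which is the point of a shared data standard.
payload = json.dumps(observation)
parsed = json.loads(payload)
print(parsed["resourceType"])  # Observation
```

Adapting this for public health surveillance, as the speaker suggests, would mean agreeing on which resource profiles and code systems each country emits, not inventing a new format.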

People are going to be like, I’m not going to share my data with you. What are you going to do with that data? So this needs to be done beforehand. And the last point, which is also the most politically challenging, is to have some kind of shared evaluation criteria across the board between different countries, embedded into the national surveillance systems. For example, Singapore’s data infrastructure environment might not apply to countries with different climate data or different demographic data, so this needs to be applied within the national surveillance systems. And my last message, I guess, is something I have noticed: AI governance frameworks often think of biosurveillance as a niche edge case.

And then the people doing biosecurity frameworks think of AI governance as just a tool. And these people don’t talk to each other. And that gap, that gap right there, is where the risk happens. So, yeah, we just need to talk to each other more. That’s easier said than done. Yeah.

Moderator

So I think I’m just about to close, with maybe five minutes or just under that for audience questions. Thank you, Justin. Ten-second final thoughts from each of you on the panel. Suryesh.

Speaker 1

Very quickly: we also need to keep in mind how AI could help solve some of these AI safety challenges, how agentic AI could be used, let’s say, when people are trying to develop vaccines. CEPI has developed a platform where agentic AI is being used to check whether someone is trying to jailbreak or misuse the tool. Second very quick point: with all that I said, there is still a gap in moving from the digital to the physical, what is called the digital-to-physical barrier. So, even if you have everything digitally, you still can’t just develop or modify viruses without proper physical infrastructure, and there are still some ways to control that.

Thank you.

Speaker 3

I think we should move towards transforming issues into intelligence: learning from the risks that occur and feeding that back into model training and other assessment activities, to mitigate risk in real time. That is where we need to move, bringing more people into evaluations and then making it safer for people to use.

Speaker 2

I’ll make it quick. The point I want to make here echoes Suryesh’s point: I think you’re right that we should not shoot ourselves in the foot, especially for developing countries; I think that’s really important. And so my last message here is that while we are forging ahead in innovation, in whatever scientific domains we are working in, we need to be conscious of the impact that we have. And I think the AI Impact Summit is one of the really good places to jumpstart that kind of conversation and break the silos. Thank you.

Moderator

Thank you, everyone. I’m just going to take about a minute to summarize the key points. Evaluation, I see, is largely a systemic question; safety measures are a systemic question. I especially liked the point on incident response not yet being in place, and a couple of points on cross-border solutions and problems, which we already have. On the discussion of open science, we talked about managed access and safeguards, and about weighing the government’s capacity to manage access against letting it out for more DIY-oriented science, which is a good term, I really like that. That’s a key area. And for emerging scientific powers, of course, collaboration is key. A tailored approach, that’s something I’m again waiting to see from IIT Madras as well, their contribution on this.

And some cross-border work on legal safe harbors and data standards harmonization, PT, that you mentioned, really landed well from this panel. I’m going to stop my summary right now; more of this will be put together in a blog at some point in the near future. Perhaps we can go to questions. First, yes, please. I think I can give you mine.

Audience Member 1

Thank you so much for your wonderful insights; I really enjoyed this session. As a researcher in AI safety at the University of York, I focus on the psychological harms of AI. What I want to ask, particularly Geeta, is this: when it comes to the definition of harms, traditional safety engineering caters more to physical harms, and now we see the whole spectrum of harms expanding beyond that. So I would love to hear about the work being done in this area by you and at CIRI, and, in fact, to enrich my research with it.

Speaker 3

Yeah, sure. When we assess harms and impacts, we have to do it from two different perspectives. One is the functional side, where we assess all the algorithmic risks and so on. The other is the human-centric perspective, where, like you said, we can look at everything from the perspective of psychology, ethics, and so on. So, here at CIRI, we work on assessing bias, determining whether a model is stereotypical or not, and generating explanations for high-level scientific models. From the psychological angle, there are the cognitive capabilities of AI models, which can actually enhance or degrade the capabilities of humans.

Those are things we are trying to assess from the incident perspective. If you read the incident reporting framework that we have, we have a taxonomy of risks and harms and also of impacts. Among the kinds of harms we have defined, we have categorized them as physical, psychological, and cyber-incident-based harm sets. Moreover, we have the generic kinds of harms, like algorithmic harms, socio-economic harms, environmental harms, and so on. So, we are trying to come up with a taxonomy that caters to the different hierarchies applying to these kinds of harms and impacts, which will again be model-specific, use-case-specific, and domain-specific.
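As an illustrative aside: a hierarchical harm taxonomy of the kind described, with top-level categories refined by model-, use-case-, and domain-specific fields, can be sketched as a small record type. This is not CIRI’s actual schema; the field names and category list below are assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative category list (an assumption, not CIRI's real taxonomy),
# combining incident-based and generic harm categories from the discussion.
CATEGORIES = {"physical", "psychological", "cyber",
              "algorithmic", "socio-economic", "environmental"}

@dataclass
class HarmRecord:
    """One reported incident, tagged along the hierarchy described."""
    category: str              # top-level harm category
    domain: str                # e.g. "healthcare"
    use_case: str = "general"  # use-case-specific refinement

def validate(record: HarmRecord) -> bool:
    """Accept only records whose category is in the agreed taxonomy."""
    return record.category in CATEGORIES

incident = HarmRecord(category="psychological",
                      domain="healthcare",
                      use_case="clinical decision support")
print(validate(incident))  # True
```

The value of fixing such a schema is that incidents reported by different teams become comparable and can feed back into model-specific, use-case-specific, and domain-specific assessments.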

That is what we are working on. We also have a healthcare-based toolkit which enables people to assess perceptions: how they treat these models, whether they see these AI applications as helpful for them or not, and then to build capacity-building programs for the different roles they work in. This has been done with CMC Vellore Hospital, where we have been assessing the perceptions of healthcare workers and then building a training module that will enable them to use AI models and tools more confidently, rather than, say, being resistant to them or not relying on them at all.

Moderator

This is probably the last quick question. Maybe keep the responses short as well, please. Sorry.

Audience Member 2

Hi. My question is about the geographical barriers we have been discussing: the modality is geography, and when we change the geography, the models tend to perform poorly. Are we concerned about the temporal modality as well? As we go forward in time, the data is eventually going to change, and that is going to affect modeling. How do we plan on mitigating such a problem if it arises?

Speaker 3

Yeah. So this comes under model monitoring, the system monitoring approach, where we consider data drift and out-of-distribution behaviour. So we consider the distributional aspects of the data and the models. Definitely, this is one of the criteria under which you assess safety and evaluate its impacts.
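As an illustrative aside: the drift check described can be sketched with the two-sample Kolmogorov-Smirnov statistic, comparing a reference (training-time) sample against live data on one feature. The samples and any alerting threshold here are illustrative assumptions, not a production monitoring setup.

```python
def ks_statistic(a, b):
    """Two-sample KS statistic: the maximum distance between the
    empirical CDFs of samples a and b (0 = identical, 1 = disjoint)."""
    a, b = sorted(a), sorted(b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        # Advance past all ties at value x in both samples.
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

reference = list(range(100))               # training-time feature values
drifted = [x + 200 for x in reference]     # live data, shifted distribution

print(ks_statistic(reference, reference))  # 0.0: no drift
print(ks_statistic(reference, drifted))    # 1.0: fully disjoint samples
```

In a monitoring pipeline, a statistic above an agreed threshold would flag the feature for retraining or review; libraries such as SciPy provide the same test with p-values (`scipy.stats.ks_2samp`).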

Moderator

Yes, I think we have time for one last question.

Audience Member 3

Thank you so much for the insightful discussion; I really appreciated the expertise you’re bringing to the topic. And thanks, PT, for bringing up COVID, because my question is about that. As we learned from COVID, biosecurity risks can quickly become a cross-border, existential threat. So what would a successful web of prevention and incident response framework look like, and who are you looking up to in this space? Who’s doing it well?

Speaker 1

I can start, and maybe PT can add. As I was mentioning, it will have to be more decentralized but at the same time integrated with the leadership. So there needs to be more empowerment of people like biosafety officers in the lab, institutional biosafety committee members, and people working on the ethics and research security side at the institutes. Those are the people who need to be empowered. There needs to be more capacity building for them, and at the same time a mechanism established so they can report incidents to the very top, to the top leadership sitting in the capitals.

That way, the leadership can in some way get an overview of, or monitor, the situation as it unfolds at the level of the different institutes.

Speaker 2

Thanks. I can add a little to that. In Singapore, we actually have different agencies responsible for this. We have the National Environment Agency, then the MOH, obviously the Ministry of Health, and then smaller agencies like the Communicable Diseases Agency, and also PREPARE, each responsible for different tasks. But I want you to envision it this way: Singapore is trying to establish itself almost as a firefighter. When there’s an incident or a crisis, who is doing what is very clear, but that is not always clear across different countries. In Laos or Vietnam, for example, it might look very different. What matters, I think, is having a very coordinated response across the different agencies on who is doing what.

For example, the National Environment Agency is responsible for wastewater surveillance, monitoring whether sickness is increasing or spiking; those are the people you would look up to. And I think that’s the last word, right? It all comes down to prevention and preparedness, in the bio context much like anywhere else.

Moderator

Thank you, everyone, for the questions, and thank you to my brilliant panelists, Suryesh, Geeta, and P.T. This was a very insightful discussion. On the screen is the work from RAND Europe with CLTR, covering some of what was referred to by P.T. and the other panelists, including some aspects of what we were discussing about risk typification. You’ll probably get some ideas there as well. And with that, I close. I’m told I’m supposed to hand over these mementos, apparently including to me, so let us do that now. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (33)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Speaker 1 is a bio‑security expert with experience in disarmament.”

The knowledge base lists Speaker 1 (Suryesh) as a bio-security expert who works in the field of biosecurity and disarmament, confirming the report’s description [S3].

Additional Context (medium)

“Open‑source tools are crucial for low‑resource settings and help democratise expertise.”

A source notes that AI democratises expertise previously limited by resources, giving people in underserved areas access to sophisticated diagnostics, which adds nuance to the claim about the importance of open-source tools for low-resource contexts [S105].

Additional Context (medium)

“Balancing security concerns with open‑source approaches requires case‑by‑case solutions.”

The knowledge base highlights a discussion on the tension between national security and open-source approaches, emphasizing the need for ongoing dialogue and tailored solutions, providing additional context to the report’s tiered-access/open-science discussion [S110].

Additional Context (medium)

“Data governance, model evaluation, and red‑team activities remain essential for responsible AI deployment.”

A source describes the practice of publishing model cards, evaluation benchmarks, and data to make model behavior transparent and to flag risks, which supports and expands on the claim about the continued importance of data governance and model evaluation [S108].

Additional Context (low)

“The panel discussion examined biosecurity challenges within the Biological Weapons Convention (BWC) framework.”

Another source discusses the focus on biosecurity within the BWC, emphasizing non-proliferation of dual-use research, which adds background to the bio-security perspective presented by Speaker 1 [S101].

External Sources (111)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S5
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S6
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S7
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S8
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S11
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S12
S13
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S14
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S16
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S17
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S18
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S19
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S20
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S22
WS #123 Responsible AI in Security Governance Risks and Innovation — Addressing global capacity disparities, Karimian noted the importance of proactive collaboration to reduce inequalities …
S23
Opening plenary session and adoption of the agenda — Equally important is the call for investments in capacity building, particularly in developing countries, in order to en…
S24
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Speaker 1, introduced by Naveen as his colleague, conducted a detailed demonstration of their AI safety platform. The de…
S25
Artificial intelligence (AI) – UN Security Council — Furthermore, the discussions underscored the necessity forregulatory mechanisms that are both flexible and adaptive. As …
S26
Open Forum #17 AI Regulation Insights From Parliaments — Sarah Lister: Thank you very much. And as we conclude this open forum on AI regulation, I’d like to start by thanking, f…
S27
From principles to practice: Governing advanced AI in action — A critical challenge Tse identified is the timeline mismatch between AI development and standards creation. Current form…
S28
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S29
WS #98 Towards a global, risk-adaptive AI governance framework — Audience: My name is Amal Ahmed. I’m currently working in DGA. I’m not asking a question. I’m just having an emphasi…
S30
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S31
How to make AI governance fit for purpose? — Anne Bouverot: Thank you so much, Gabriela. Thank you for this. I’m lucky to go first because by the time everyone has s…
S32
Can we test for trust? The verification challenge in AI — Anja Kaspersen discussed the role of technical professional organizations like IEEE in AI governance conversations. She …
S33
Challenging the status quo of AI security — – **Security vulnerabilities**: Current systems showing susceptibility to prompt injection and manipulation attacks – A…
S34
Protecting Democracy against Bots and Plots — Artificial Intelligence can deliver various results that need to be regulated to prevent misuse.
S35
Laying the foundations for AI governance — Dawn Song: Thank you very much. Okay, I think we’ll turn now on this question of obstacles to Professor Dawn Song. Okay,…
S36
AI Infrastructure and Future Development: A Panel Discussion — Physical infrastructure constraints create bottlenecks – need for skilled trades workers, power, concrete, copper in mas…
S37
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S38
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — **Comprehensive Ecosystem Development**: The need for systematic approaches covering education, finance, regulation, and…
S39
WS #103 Aligning strategies, protecting critical infrastructure — Need for capacity building, especially in the Global South
S40
Free Science at Risk? / Davos 2025 — There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation
S41
Driving Social Good with AI_ Evaluation and Open Source at Scale — However, audience questions revealed tension between this contextual approach and institutional needs for standardizatio…
S42
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S43
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Harmonizing cross-border regulations and practices within the African continent presents challenges due to differing reg…
S44
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S45
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Additionally, the analysis underscores the importance of harmonizing and aligning laws to facilitate cross-border data f…
S46
From principles to practice: Governing advanced AI in action — Both speakers advocate for embedding safety and responsibility considerations from the initial design phase rather than …
S47
WS #123 Responsible AI in Security Governance Risks and Innovation — This fundamentally challenges the conventional approach to AI governance by arguing against treating it as a compliance …
S48
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S49
Advancing Scientific AI with Safety Ethics and Responsibility — Need for decentralized oversight mechanisms with empowered local biosafety officers and institutional review panels Saf…
S50
Main Session | Policy Network on Internet Fragmentation — Multi-stakeholder collaboration is crucial for addressing fragmentation risks
S52
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S53
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — It is clear from the audience’s questions that there is a concern about balancing the need for data localisation with th…
S54
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S55
UN SECRETARY-GENERAL’S STRATEGY ON NEW TECHNOLOGIES — In this context, difficult policy dilemmas and questions relating to the source, nature and scope of regulatory and…
S56
I NTRODUCTION — – Establishing a reference framework to guide government entities in adopting a best-in-class architecture for digital s…
S57
Meeting REPORT — The structured yet responsive policy implementation strategy—including provisions for regular review—reflects an adaptiv…
S58
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S59
360° on AI Regulations — The advancements and widespread use of AI technology have raised concerns about its potential misuse. The dual-use natur…
S60
Digital policy in 2019: A mid-year review — Technological innovation is creating new possibilities. Artificial intelligence developments are moving at a fast pace, …
S61
AI for Humanity: AI based on Human Rights (WorldBank) — Stating that technology developments occur at a rapid pace implies a need for due diligence and risk assessment to keep …
S62
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — While disagreeing that governance is dead, Curioni acknowledges that governance and regulation must evolve significantly…
S63
GOVERNING AI FOR HUMANITY — As far as ‘safety’ is contextual, involving various stakeholders and cultures in creating such standards enhances their …
S64
Comprehensive Report: European Approaches to AI Regulation and Governance — A particularly concerning dimension emerged around mental health impacts of AI use. An audience member reported people b…
S65
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — In sum, this analysis illustrates that open source software serves not merely as a technical tool but as a catalyst for …
S66
Advancing Scientific AI with Safety Ethics and Responsibility — -Shifting Risk Landscape in Life Sciences: The discussion highlighted how AI biodesign tools and LLMs are fundamentally …
S67
Policymaker’s Guide to International AI Safety Coordination — This comment introduced a fundamentally different perspective on AI risk, shifting focus from deployment and governance …
S68
From principles to practice: Governing advanced AI in action — – Udbhav Tiwari- Brian Tse Chris argues that some AI risks require entirely new risk management approaches because they…
S69
WSIS Action Line C5: Building Trust in Cyberspace — Capacity building must be tailored to different national development levels and maturity
S70
WS #103 Aligning strategies, protecting critical infrastructure — Capacity building essential, especially for Global South
S71
Free Science at Risk? / Davos 2025 — There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation
S72
WSIS Action Line C7 E-science: Assessment of progress made over the last 20 years — Open science platforms are highlighted as crucial, but they must be widely accessible to ensure equitable benefits from …
S73
test marko — concluded that while Geneva faces challenges, it retains significant advantages as a center for digital governance. Howe…
S74
https://dig.watch/event/india-ai-impact-summit-2026/advancing-scientific-ai-with-safety-ethics-and-responsibility — So we have a lot of this risk landscape shifting a little bit more upstream to the design side when it comes to at least…
S75
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S76
Strategy — The document in its current form, serves as a high-level overview of Egypt’s National AI Strategy. In it is not mean…
S77
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Harmonizing cross-border regulations and practices within the African continent presents challenges due to differing reg…
S78
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S79
Day 0 Event #171 Legalization of data governance — Cross-border data flows require balancing security and utilization
S80
WS #31 Cybersecurity in AI: balancing innovation and risks — AUDIENCE: Hi, I’m Odas. I’m from… Digital Uganda. We’re based in Kigali, Rwanda. And I want to ask Yulia regarding w…
S81
Workshop 8: How AI impacts society and security: opportunities and vulnerabilities — Piotr Słowiński: Okay, great. And I think that you can see my screen, at least you should by now. So, yeah, welcome. I…
S82
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk. Discussions on emerging…
S83
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S84
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S85
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — The tone of the discussion was largely constructive and solution-oriented, with speakers offering insights from differen…
S86
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — The tone was largely constructive and solution-oriented. Speakers acknowledged significant challenges but focused on ide…
S87
Open Forum #15 Digital cooperation: the road ahead — The tone was generally constructive and solution-oriented. Participants shared examples of successful partnerships and i…
S88
Panel 2 – Anticipating and Mitigating Risks Along the Global Subsea Network  — The discussion maintained a professional, collaborative tone throughout, with participants demonstrating technical exper…
S89
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — The overall tone was optimistic and forward-looking, with speakers expressing enthusiasm about the potential of DPGs whi…
S90
Launch / Award Event #57 Governing Identity Online Nations and Technologists — The discussion maintains an academic and informative tone throughout, characterized by scholarly presentation of researc…
S91
Safe Smart Cities and Climate Frustration — The discussion maintained a collaborative and solution-oriented tone throughout. Speakers were optimistic about the pote…
S92
Leaders TalkX: Partnership pivot: rethinking cooperation in the digital era — The discussion maintained a professional, collaborative, and forward-looking tone throughout. Despite the moderator’s ac…
S93
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S94
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S95
Opening and introduction — The AU’s commitment to working with Member States in adopting the meeting’s recommendations was reaffirmed, alongside th…
S96
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S97
Opening Ceremony — Kurtis Lindqvist: Your Excellencies, distinguished guests, ladies and gentlemen. First of all, I’d like to thank Ministe…
S98
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part I — In summary, the speaker outlines Iraq’s progressive plans for development in information technology and digital skills e…
S99
Day 0 Event #257 Enhancing Data Governance in the Public Sector — – **Judith Hellerstein** – Moderator of the session on “Enhancing Data Governance in the Public Sector” Guy Berger rais…
S100
The Challenges of Data Governance in a Multilateral World — An advocate in the discussion strongly supports data governance models that prioritize cooperation, privacy, and the com…
S101
morning session — The argument calls for a clearer focus on the specific aspects of biosecurity within the BWC framework. An alternative v…
S102
Thinking through Augmentation — While Ucuzoglu is optimistic about the long-term impact of transformative technology, he acknowledges that it is not an …
S103
Breakthroughs in human-centric bioscience with AI — A consortium led by Integra Therapeutics, Pompeu Fabra University, and the Centre for Genomic Regulation in Barcelona, S…
S104
AI Governance Dialogue: Steering the future of AI — Development | Sociocultural Last year, the Nobel Prize for Chemistry was awarded to the developers of AlphaFold, an AI …
S105
Enhancing rather than replacing humanity with AI — AI democratises expertise that was previously limited by resources. People in underserved areas have access to sophistic…
S106
Flexibility 2.0 / Davos 2025 — There is a moderate to high level of consensus among the speakers on key issues. This consensus suggests a growing recog…
S107
NRIs MAIN SESSION: DATA GOVERNANCE — Furthermore, it is noted that support for data systems should not be limited to the private sector. The analysis suggest…
S108
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S109
WSIS Action Line C7:E-Science: Open Science, Data, Science cooperation, IYQ, International Decade of Science for Sustainable Development — Strong consensus emerged around human-centered technology development, the need for equitable access to scientific resou…
S110
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Balancing Security and Openness**: The tension between national security concerns and open-source approaches require…
S111
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Interestingly, it’s apparent that technical capacity does not represent the only challenge when it comes to integrating …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
9 arguments · 159 words per minute · 1969 words · 742 seconds
Argument 1
*Structural risk shift* – Speaker 1: AI tools decouple design from physical containment, moving bio‑risk upstream to the design stage and demanding new oversight mechanisms.
EXPLANATION
AI‑enabled biodesign tools allow creation of biological agents without the need for traditional lab containment, shifting the primary risk from downstream physical safeguards to the upstream design phase. Governance therefore must focus on monitoring and controlling design activities rather than only facility inspections.
EVIDENCE
Speaker 1 explains that historically risk governance in life sciences was tied to physical infrastructure such as lab facilities and material-transfer controls ([7]), but AI biodesign tools have altered this paradigm ([8]). He cites RAND’s identification of more than 1,500 biodesign tools that are transforming scientific practice ([9]) and notes that AI now makes it easier to engineer proteins, optimise DNA sequences, and model pathogen interactions, effectively decoupling these activities from physical containment measures ([10-12]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift of biosecurity risk upstream due to AI biodesign tools and the existence of over 1,500 such tools is documented in the discussion of the changing risk landscape in life sciences [S3].
MAJOR DISCUSSION POINT
Structural risk shift
Argument 2
*Decentralized oversight* – Speaker 1: Centralised authority in Delhi is insufficient; oversight must be distributed to institutional biosafety and information‑security offices.
EXPLANATION
A single national authority cannot keep pace with the rapidly evolving AI‑driven bio‑risk landscape; oversight should be spread across labs, biosafety officers, and information‑security units to create a network of checks and balances. This decentralised model aims to provide more adaptive and timely supervision.
EVIDENCE
He argues that a lone authority in Delhi cannot manage the required oversight, calling for more decentralised checks and balances ([24-26]). He proposes empowering information-security and biosafety offices and establishing a “way of prevention” that combines multiple measures rather than relying on a single one ([27-31]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Decentralised oversight mechanisms with empowered local biosafety officers and institutional review panels are advocated as necessary alternatives to top-down approaches [S3].
MAJOR DISCUSSION POINT
Decentralized oversight
Argument 3
*Invest in capacity building for AI‑enabled biosafety* – Training more scientists and security professionals in AI‑driven bio‑security, chemical security and nuclear security is essential for a resilient ecosystem.
EXPLANATION
A skilled workforce can recognise emerging AI‑generated threats and apply appropriate safeguards, reducing reliance on ad‑hoc measures.
EVIDENCE
Speaker 1 notes the need to train more people on AI-enabled science, chemical security, AI nuclear security and related fields, emphasizing capacity building for the Indian ecosystem and similar contexts [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building programmes for biosafety officers, ethics and research-security personnel are highlighted, and global capacity disparities are noted as a priority for investment in developing regions [S3], [S22], [S23].
MAJOR DISCUSSION POINT
Capacity building
Argument 4
*Integrate AI evaluation into existing biosafety systems* – Embedding AI risk assessments within current biosafety and bio‑security offices strengthens institutional readiness for new AI‑driven threats.
EXPLANATION
By aligning AI evaluation with established information‑security and biosafety structures, institutions can respond more quickly to novel risks.
EVIDENCE
Speaker 1 calls for integrating AI evaluation into biosafety systems and strengthening institutional readiness, asking how information-security and biosafety offices can be better prepared for AI risks [18-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI safety assessment with existing biosafety and resource-security systems is recommended to avoid fragmentation [S3].
MAJOR DISCUSSION POINT
AI‑biosafety integration
Argument 5
*Adopt adaptive, continuous oversight mechanisms* – Traditional periodic, paper‑based inspections are insufficient; oversight must evolve in real time with rapid AI advances.
EXPLANATION
Continuous monitoring and adaptive checks enable regulators to keep pace with fast‑moving AI capabilities that can outstrip static review processes.
EVIDENCE
Speaker 1 argues for more adaptive oversight that goes beyond occasional inspections, matching the speed and scale of AI developments [23-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for flexible, adaptive regulatory mechanisms that keep pace with fast-moving AI developments is emphasized in UN Security Council discussions and in analyses of the timeline mismatch between AI progress and standards creation [S25], [S27].
MAJOR DISCUSSION POINT
Adaptive oversight
Argument 6
*Implement tiered risk classification for AI biodesign tools* – Not all AI‑generated biological tools pose the same danger; a graduated risk framework can focus scrutiny where it matters most.
EXPLANATION
Higher‑risk tools (e.g., those trained on virus data) receive stricter controls, while lower‑risk applications enjoy lighter oversight, optimising resource allocation.
EVIDENCE
Speaker 1 proposes a tiered risk classification, distinguishing high-risk biodesign tools from lower-risk ones such as those dealing with harmless animal data [142-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk-adaptive AI governance frameworks that propose tiered or graduated oversight for high-risk versus low-risk tools are discussed in the context of accelerating standards and risk-adaptive approaches [S27].
MAJOR DISCUSSION POINT
Risk tiering
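A graduated scheme of this kind can be sketched as a small rule-based classifier. The tier names, the attribute names (`trained_on_pathogen_data`, `can_generate_novel_sequences`, `open_weights`) and the mapping below are illustrative assumptions, not criteria stated in the session:

```python
# Illustrative sketch of a tiered risk classification for AI biodesign tools.
# Tier names and criteria are hypothetical, not drawn from the session.

def classify_tool(trained_on_pathogen_data: bool,
                  can_generate_novel_sequences: bool,
                  open_weights: bool) -> str:
    """Map a tool's attributes to a risk tier with graduated oversight."""
    if trained_on_pathogen_data and can_generate_novel_sequences:
        return "HIGH"    # strict controls: pre-deployment assessment, credentialed access
    if trained_on_pathogen_data or (can_generate_novel_sequences and open_weights):
        return "MEDIUM"  # periodic review and usage logging
    return "LOW"         # light-touch oversight, e.g. tools trained on harmless animal data

assert classify_tool(True, True, False) == "HIGH"
assert classify_tool(False, False, False) == "LOW"
```

The point of such a mapping is resource allocation: scrutiny concentrates on the small set of high-risk tools while lower tiers face lighter review.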
Argument 7
*Embed AI safety assessment into grant review and create cross‑trained review panels* – Funding decisions should require AI safety checks, and dedicated panels with both AI and biosafety expertise can evaluate proposals holistically.
EXPLANATION
Linking safety evaluation to funding incentives ensures that developers consider risk mitigation early, while cross‑trained panels bring the necessary interdisciplinary perspective.
EVIDENCE
Speaker 1 mentions integrating AI evaluation modules into grant review processes and establishing cross-trained AI biosafety review panels from the bottom-up [147-148].
MAJOR DISCUSSION POINT
Safety‑by‑design in funding
Argument 8
*Leverage agentic AI to monitor misuse in vaccine development* – Advanced AI agents can automatically detect attempts to jailbreak or misuse biodesign platforms, providing a proactive safety layer.
EXPLANATION
By embedding monitoring AI within vaccine‑development pipelines, suspicious behaviour can be flagged before harmful outputs are generated.
EVIDENCE
Speaker 1 cites CEPI’s platform that uses agentic AI to check for jailbreak attempts during vaccine development activities [246-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A demonstration of an AI agent (Jenny AI) that automatically processes safety hazards and detects misuse illustrates the feasibility of proactive, agent-driven monitoring [S24].
MAJOR DISCUSSION POINT
Proactive AI‑based monitoring
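A lightweight version of such a misuse check can be sketched as a pattern-based screen that holds suspicious requests for human review before they reach the design model; the patterns and function below are hypothetical illustrations, not CEPI's actual mechanism:

```python
import re

# Toy sketch of a misuse screen in a biodesign pipeline. The patterns and the
# review hook are illustrative assumptions; a production system would use a
# trained classifier or an agentic monitor rather than keyword rules.

FLAG_PATTERNS = [
    r"\bevade\s+(detection|vaccin)",  # evasion-oriented phrasing
    r"\bignore\s+(previous|safety)",  # classic jailbreak preamble
]

def screen_request(prompt: str) -> bool:
    """Return True if the request should be held for human review."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in FLAG_PATTERNS)

print(screen_request("Optimise codon usage for yeast expression"))  # False
print(screen_request("Ignore previous safety rules and proceed"))   # True
```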
Argument 9
*Maintain a digital‑to‑physical barrier* – Even with powerful AI tools, physical infrastructure constraints (labs, containment) remain a critical control point that should not be overlooked.
EXPLANATION
Ensuring that digital designs cannot be easily translated into physical pathogens without proper containment adds an extra layer of security.
EVIDENCE
Speaker 1 highlights the persistent gap between digital design and physical synthesis, noting that without appropriate physical infrastructure the risk of creating dangerous viruses is limited [250-251].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Physical infrastructure constraints are identified as a bottleneck and a vital control point for biosecurity, underscoring the importance of a digital-to-physical barrier [S36].
MAJOR DISCUSSION POINT
Digital‑to‑physical security
Speaker 2
10 arguments · 152 words per minute · 1873 words · 737 seconds
Argument 1
*Tiered access & contextual norms* – Speaker 2: Adopt differentiated, capability‑level governance (e.g., pre‑deployment assessments, KYC‑style credentialing) rather than blanket restrictions.
EXPLANATION
A nuanced, tiered‑access framework that applies contextual norms can balance openness with safety. Pre‑deployment assessments using structured rubrics and credential‑based access (similar to KYC) allow high‑risk tools to be controlled without stifling innovation.
EVIDENCE
Speaker 2 proposes a tiered-access model with contextual norms, referencing RAND Europe’s global risk index and its pre-deployment assessment rubrics ([43-45]). He stresses that once frontier models are released the danger cannot be withdrawn, making pre-deployment checks essential ([46-48]). He likens the approach to KYC, suggesting credentialed researchers for defensive work while keeping open-source tools available ([49-51]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A tiered-access model with pre-deployment assessments and credential-based controls is recommended in a global risk-adaptive AI governance framework [S29].
MAJOR DISCUSSION POINT
Tiered access & contextual norms
Argument 2
*Preserving open‑source benefits* – Speaker 2: Open‑source tools are essential for low‑resource settings; governance should not conflate openness with danger.
EXPLANATION
Open‑source biodesign tools are critical for innovation in low‑resource environments, and any governance model that treats open‑source as inherently risky would hinder progress. Policies should differentiate between the tool’s capabilities and its misuse potential.
EVIDENCE
He emphasizes that open-source tools are necessary for low-resource settings and warns against equating openness with danger, stating that open-source development is a vital innovation point ([53-56]).
MAJOR DISCUSSION POINT
Preserving open‑source benefits
Argument 3
*Periodic global monitoring* – Speaker 2: Propose a six‑monthly, government‑backed AI safety institute that conducts independent assessments and shares results through a credentialed network.
EXPLANATION
Regular, semi‑annual independent evaluations of AI systems, supported by governments and coordinated through a credentialed network, can keep risk assessments up‑to‑date. Automation with AI can increase the efficiency of this monitoring.
EVIDENCE
He cites RAND Europe’s recommendation for a six-monthly ritual of monitoring and risk assessment, involving governments and independent researchers, and suggests using AI to automate the process ([105-108]).
MAJOR DISCUSSION POINT
Periodic global monitoring
Argument 4
*Pre‑deployment assessment* – Speaker 2: Structured rubrics before release are a critical safeguard, especially for frontier models that can outperform expert virologists.
EXPLANATION
Assessing AI systems against structured criteria before deployment can prevent dangerous capabilities from being released unchecked. Sharing the assessment outcomes with a credentialed community ensures broader awareness while protecting sensitive information.
EVIDENCE
He highlights the importance of pre-deployment assessments with structured rubrics prior to releasing frontier models, noting that once released the danger cannot be withdrawn ([44-48]). He also mentions that assessment results should be shared across a credentialed network with tiered confidentiality rather than kept proprietary ([118-119]).
MAJOR DISCUSSION POINT
Pre‑deployment assessment
Argument 5
*Data‑standard harmonisation* – Speaker 2: Advocate for federated standards (e.g., HL7‑FHIR‑style) to enable interoperable biosurveillance across countries.
EXPLANATION
To avoid fragmentation, biosurveillance data should follow harmonised, federated standards that allow different jurisdictions to exchange information securely. An HL7‑FHIR‑like framework adapted for public‑health surveillance can provide the needed interoperability.
EVIDENCE
He points out the current lack of unified data standards for biosurveillance and proposes a federated interpretability framework similar to HL7-FHIR, adapted for public-health data ([226-230]).
MAJOR DISCUSSION POINT
Data‑standard harmonisation
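What an HL7‑FHIR‑style federated surveillance record might look like can be sketched as follows; the resource type, field names and codes are illustrative assumptions loosely modeled on FHIR conventions, not a real FHIR profile:

```python
import json

# Hypothetical FHIR-style biosurveillance record. Field names are illustrative,
# loosely modeled on HL7 FHIR's resourceType/coding conventions.
record = {
    "resourceType": "SurveillanceReport",  # assumed resource name
    "jurisdiction": "IN",                  # ISO country code of reporting authority
    "pathogen": {"system": "http://example.org/pathogens", "code": "H5N1"},
    "caseCount": 12,
    "reportingPeriod": {"start": "2025-06-01", "end": "2025-06-07"},
}

# A shared schema lets each jurisdiction validate before exchange,
# which is what makes cross-border interoperability feasible.
REQUIRED = {"resourceType", "jurisdiction", "pathogen", "caseCount", "reportingPeriod"}

def is_valid(r: dict) -> bool:
    return REQUIRED.issubset(r)

print(is_valid(record))          # True
print(json.dumps(record)[:40])   # serialised for federated exchange
```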
Argument 6
*Pre‑negotiated safe‑harbor agreements* – Speaker 2: Legal frameworks must be established in advance to allow rapid cross‑border data sharing during public‑health emergencies.
EXPLANATION
Legal safe‑harbor provisions should be negotiated before crises occur so that data can be shared swiftly without legal hesitation. This pre‑emptive approach enables coordinated responses during emergencies.
EVIDENCE
He argues that safe-harbor agreements for cross-border data sharing need to be negotiated beforehand, otherwise countries may refuse data exchange during an outbreak ([230-234]).
MAJOR DISCUSSION POINT
Pre‑negotiated safe‑harbor agreements
Argument 7
*Clarify the intended role of an AI system before applying governance* – Understanding what a system is meant to do is a prerequisite for choosing appropriate oversight mechanisms.
EXPLANATION
A clear role definition helps differentiate between benign, assistive, or potentially dangerous applications, guiding the selection of safeguards.
EVIDENCE
Speaker 2 repeatedly stresses the need for a clear understanding of the AI system’s role as a key point before any governance discussion [84-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Fit-for-purpose AI governance literature stresses that defining a system’s intended role is a prerequisite for selecting suitable governance levers [S31].
MAJOR DISCUSSION POINT
Role‑based governance
Argument 8
*Employ non‑interactive, automated risk‑monitoring methods* – Automated assessments that do not require researchers to directly query dangerous models can provide meaningful safeguards without exposing users to risk.
EXPLANATION
Such non‑interactive methodologies reduce the chance of accidental misuse while still delivering valuable risk insights.
EVIDENCE
Speaker 2 describes a non-interactive methodology that avoids direct researcher interaction with dangerous systems, presenting it as an already meaningful safeguard [107-108].
MAJOR DISCUSSION POINT
Non‑interactive monitoring
Argument 9
*Anchor AI‑safety institutes within existing international frameworks* – Linking new AI safety bodies to the Biological Weapons Convention (BWC) or the World Health Organization (WHO) provides legitimacy and facilitates coordination.
EXPLANATION
Embedding AI safety institutions within established treaties ensures they operate under recognized legal mandates and benefit from existing verification mechanisms.
EVIDENCE
Speaker 2 notes that an AI safety institute should have anchoring around the Biological Weapons Convention or the WHO to strengthen its authority [116-118].
MAJOR DISCUSSION POINT
International anchoring
Argument 10
*Require substantial multilateral government investment for semi‑annual monitoring* – A six‑monthly risk‑assessment ritual cannot be sustained without dedicated funding from governments at the multilateral level.
EXPLANATION
Consistent financial support ensures the continuity, depth and credibility of periodic global monitoring activities.
EVIDENCE
Speaker 2 points out that the proposed six-monthly monitoring cadence would need a very significant investment from governments, emphasizing that it cannot proceed without such funding [111-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for adequately funded, flexible regulatory mechanisms to support continuous monitoring of AI systems are made in discussions about adaptive AI regulation [S25].
MAJOR DISCUSSION POINT
Funding for periodic monitoring
Speaker 3
8 arguments · 147 words per minute · 1665 words · 675 seconds
Argument 1
*Capacity gaps & AI readiness* – Speaker 3: Indian and Southeast Asian institutions vary widely in resources; AI readiness must be tailored to local contexts.
EXPLANATION
AI readiness differs dramatically across the Global South, with India ranking high globally but many Southeast Asian nations lagging. Governance and capacity‑building measures must reflect these heterogeneous resource levels and local needs.
EVIDENCE
Speaker 3 notes India’s strong AI ranking (third globally) contrasted with Indonesia’s lower rank (~49), highlighting the gap in AI readiness across the region ([62-66]). He stresses that solutions designed for Western contexts cannot be directly applied to the varied capacities of South-East Asian institutions ([64-66]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of global capacity disparities highlight the need for targeted capacity-building in developing regions, noting India’s high AI ranking versus lower-ranked Southeast Asian nations [S22], [S23].
MAJOR DISCUSSION POINT
Capacity gaps & AI readiness
Argument 2
*Socio‑cultural benchmarks* – Speaker 3: Existing safety benchmarks fail 20‑30 % of biological risk tests; assessments must incorporate regional socio‑cultural factors and participatory stakeholder input.
EXPLANATION
Current safety benchmarks for large language models perform poorly in biological risk scenarios, especially in the Global South. Incorporating socio‑cultural evaluations and stakeholder participation can produce more relevant safeguards.
EVIDENCE
He references a Southeast-Asia safety benchmark showing that leading LLMs fail 20-30 % of biological risk evaluations ([68-70]) and argues for additional sociocultural assessments that consider regional harms and involve end-users and stakeholders throughout the development lifecycle ([71-75]).
MAJOR DISCUSSION POINT
Socio‑cultural benchmarks
Argument 3
*Establish a Global South network for trustworthy AI* – A dedicated network can coordinate capacity‑building, standards‑setting and shared learning among low‑resource countries.
EXPLANATION
By pooling expertise and resources, the Global South can develop context‑appropriate governance models and avoid reliance on external solutions.
EVIDENCE
Speaker 3 announces the launch of a global-south network for trustworthy AI that will enable collaborative development and deployment in low-resource settings [164-165].
MAJOR DISCUSSION POINT
Regional collaboration
Argument 4
*Create an AI safety commons for the Global South* – A shared repository of safety tools, benchmarks and best‑practice guidelines will accelerate responsible AI deployment across diverse contexts.
EXPLANATION
The commons provides open access to evaluation resources, fostering transparency and collective improvement of safety standards.
EVIDENCE
Speaker 3 describes an upcoming AI safety commons for the Global South as part of the safe and trusted AI pillar, expected to be operational within one to two years [165-166].
MAJOR DISCUSSION POINT
Safety commons
Argument 5
*Develop an incident‑reporting framework tailored to Indian settings* – A context‑specific mechanism captures AI‑related incidents that might be missed by Western‑centric reporting systems.
EXPLANATION
Tailoring the taxonomy and reporting process to local realities improves data quality and enables timely response to emerging threats.
EVIDENCE
Speaker 3 mentions that CRI has created an incident-reporting mechanism and framework specifically designed for Indian contexts, differing from Western models [169-170].
MAJOR DISCUSSION POINT
Localized incident reporting
Argument 6
*Prioritise privacy protections for marginalized communities* – AI deployments must safeguard the data and identities of vulnerable groups to prevent disproportionate harms.
EXPLANATION
Embedding privacy safeguards ensures that AI‑driven surveillance or health tools do not exacerbate existing inequities.
EVIDENCE
Speaker 3 stresses the need to provide more privacy to people, especially those from marginalized communities, as part of responsible AI deployment [176-177].
MAJOR DISCUSSION POINT
Privacy for vulnerable groups
Argument 7
*Foster collaborative multi‑stakeholder governance* – Effective AI safety requires coordinated action among academia, industry, government and civil society.
EXPLANATION
Joint efforts break silos, align incentives and ensure that diverse perspectives shape policy and technical standards.
EVIDENCE
Speaker 3 calls for collaborative work between different stakeholders and notes that governments have already provided prompt knowledge through techno-legal frameworks and guidelines [177-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive collaboration between industry, academia and governments is identified as essential to reduce capacity gaps and build resilient AI governance ecosystems [S22].
MAJOR DISCUSSION POINT
Multi‑stakeholder collaboration
Argument 8
*Align AI governance with emerging national guidelines such as METI’s* – Nationally published AI governance guidelines can serve as a template for regional adaptation and harmonisation.
EXPLANATION
Referencing METI’s recently released AI governance guidelines helps ensure consistency while allowing local tailoring.
EVIDENCE
Speaker 3 notes that governments have issued AI governance guidelines, specifically mentioning a recent METI publication that can inform other Southeast Asian countries [178-180].
MAJOR DISCUSSION POINT
National guideline alignment
Audience Member 1
1 argument · 167 words per minute · 100 words · 35 seconds
Argument 1
*Comprehensive harms taxonomy* – Audience Member 1: Calls for inclusion of psychological, cyber‑incident, socio‑economic, and environmental harms alongside physical risks in AI safety assessments.
EXPLANATION
A broader taxonomy that captures non‑physical harms—such as psychological, cyber‑incident, socio‑economic, environmental, and algorithmic impacts—provides a more complete picture of AI risks. This enables targeted mitigation strategies across diverse domains.
EVIDENCE
The participant describes CIRI’s work on a taxonomy that categorises harms into physical, psychological, cyber-incident, socio-economic, environmental, and algorithmic categories, and mentions a toolkit used with a hospital to assess healthcare workers’ perceptions of AI tools ([265-274]).
MAJOR DISCUSSION POINT
Comprehensive harms taxonomy
Audience Member 2
1 argument · 170 words per minute · 75 words · 26 seconds
Argument 1
*Model‑drift mitigation* – Audience Member 2: Highlights the need for continuous monitoring of distributional shifts over time to maintain model safety and performance.
EXPLANATION
AI models can degrade as data distributions change over time, so ongoing monitoring for temporal drift is essential to ensure continued safety and reliability. Detecting and addressing drift should be part of systematic model evaluation.
EVIDENCE
The audience member points out that model-drift monitoring should consider data moving out of distribution over time, describing this as part of a system-monitoring approach to safety ([286-288]).
MAJOR DISCUSSION POINT
Model‑drift mitigation
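The kind of temporal drift check the audience member describes can be sketched as a simple window-versus-reference comparison; the standardised-shift statistic and the threshold below are illustrative choices, not a method named in the session:

```python
import random
import statistics

# Minimal sketch of temporal drift monitoring: compare a live window of model
# inputs against a reference (training-time) sample and flag large shifts.

def drift_score(reference: list, window: list) -> float:
    """Standardised shift of the live window's mean relative to the reference."""
    mu, sigma = statistics.mean(reference), statistics.stdev(reference)
    return abs(statistics.mean(window) - mu) / (sigma or 1.0)

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(1000)]
stable = [random.gauss(0.0, 1.0) for _ in range(200)]
shifted = [random.gauss(1.5, 1.0) for _ in range(200)]  # distribution has moved

THRESHOLD = 0.5  # illustrative: flag windows whose mean moved by > 0.5 reference SDs
print(drift_score(reference, stable) > THRESHOLD)   # expect False
print(drift_score(reference, shifted) > THRESHOLD)  # expect True
```

Production systems would track full distributions (e.g. with two-sample tests per feature) rather than a single mean, but the monitoring loop is the same: periodic comparison, a threshold, and an alert path.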
Audience Member 3
2 arguments · 190 words per minute · 78 words · 24 seconds
Argument 1
*Empowered biosafety officers* – Audience Member 3: Suggests a layered response where institutional biosafety officers report upward to central leadership for a holistic view.
EXPLANATION
Decentralising incident response by empowering biosafety officers at labs and institutions, while establishing clear channels for reporting to national leadership, creates a coordinated yet flexible oversight system.
EVIDENCE
He proposes empowering biosafety officers and institutional biosafety committees, building capacity for them, and creating mechanisms for incident reporting up to top leadership for an overview of the situation across institutes ([295-299]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Decentralised oversight that empowers local biosafety officers and creates reporting channels to central leadership is advocated as a key element of effective biosecurity governance [S3].
MAJOR DISCUSSION POINT
Empowered biosafety officers
Argument 2
*Clear agency roles* – Audience Member 3: Uses Singapore’s multi‑agency model (NEA, MOH, Communicable Disease Agency, Prepare Agency) as an example of coordinated response that other nations could emulate.
EXPLANATION
A clear delineation of responsibilities among agencies—such as Singapore’s National Environmental Agency, Ministry of Health, Communicable Disease Agency, and Prepare Agency—ensures swift and organized action during health crises. Replicating such role clarity can improve cross‑border coordination.
EVIDENCE
He describes Singapore’s structure where distinct agencies handle specific tasks (e.g., NEA for wastewater surveillance) and notes that this clear allocation of duties serves as a model for coordinated incident response ([301-309]).
MAJOR DISCUSSION POINT
Clear agency roles
Moderator
7 arguments · 125 words per minute · 969 words · 462 seconds
Argument 1
*Define the appropriate governance lens* – The discussion should first decide whether AI‑biosecurity issues are best addressed through data‑governance, model‑design controls, or verification/compliance mechanisms.
EXPLANATION
Choosing the right angle determines which policies, standards and oversight tools will be most effective for managing emerging risks.
EVIDENCE
The moderator opens the session by asking whether the problem should be framed as a data-governance issue, a model-design problem, or a verification/compliance challenge [1].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to frame AI-biosecurity governance as data governance, model design, or compliance is reflected in fit-for-purpose governance analyses that explore appropriate lenses for AI risk management [S30], [S31].
MAJOR DISCUSSION POINT
Governance framing
Argument 2
*Balance open‑science benefits with safeguards* – Open scientific collaboration must be preserved while preventing the destabilising diffusion of high‑risk AI capabilities.
EXPLANATION
Open science accelerates innovation and capacity building, especially in low‑resource settings, but unrestricted release of powerful tools can create security threats.
EVIDENCE
The moderator asks how to keep the advantages of open science while avoiding the spread of dangerous capabilities [34-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on AI regulation emphasize balancing the benefits of open science with the need to mitigate the diffusion of dangerous capabilities [S30].
MAJOR DISCUSSION POINT
Open‑science vs. risk mitigation
Argument 3
*Make independent evaluation and red‑teaming a norm* – Systematic, independent technical assessments should become a permanent part of the global scientific infrastructure for AI systems that generate biological outputs.
EXPLANATION
Regular red‑team exercises and independent audits can surface hidden vulnerabilities before they are exploited, ensuring a baseline of safety worldwide.
EVIDENCE
The moderator explicitly asks whether independent evaluation and red‑teaming should become a norm for bio-security-relevant AI systems [82-83].
MAJOR DISCUSSION POINT
Institutionalising independent evaluation
Argument 4
*Ensure safety measures are rigorous yet feasible* – Governance frameworks must strike a balance between scientific rigor and the practical constraints of diverse research ecosystems.
EXPLANATION
Overly burdensome requirements could hinder research, while lax standards leave gaps; policies need to be adaptable to varying institutional capacities.
EVIDENCE
The moderator asks Suryesh how to keep safety measures rigorous but feasible within the research ecosystems he knows well [120-121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adaptive regulatory approaches that consider practical constraints of varied research ecosystems are highlighted as necessary for feasible yet rigorous AI safety measures [S25].
MAJOR DISCUSSION POINT
Feasibility of safety regimes
Argument 5
*Empower emerging scientific powers to shape governance* – Countries of the Global South should lead the design of AI governance rather than merely importing Western frameworks.
EXPLANATION
Local contexts, resource constraints and unique innovation pathways require home‑grown policies that can be shared with other emerging economies.
EVIDENCE
The moderator prompts Geetha to discuss whether emerging scientific powers can shape governance and what leadership would look like [159-160].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of emerging scientific powers taking a leading role in AI governance and capacity-building is underscored in discussions of global capacity disparities [S22].
MAJOR DISCUSSION POINT
Leadership of emerging powers
Argument 6
*Integrate AI safety into existing programs without fragmentation* – AI must be embedded in legacy biosurveillance and public‑health systems in a coordinated way to avoid siloed evaluations.
EXPLANATION
Co‑designing safety, interoperability and evaluation standards across new AI‑enabled tools and established infrastructures prevents gaps and duplication.
EVIDENCE
The moderator asks how to ensure safety, evaluation and interoperability across emerging and legacy systems without fragmentation [204-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Integration of AI safety with existing biosafety and resource-security programs is recommended to prevent fragmentation and ensure coordinated oversight [S3].
MAJOR DISCUSSION POINT
Integration with legacy systems
Argument 7
*Adopt a systemic, institution‑level approach to safety evaluation* – Auditing algorithms alone is insufficient; the surrounding institutions and operational practices must also be assessed.
EXPLANATION
A holistic view that includes institutional policies, capacity and incentive structures yields more reliable risk mitigation than isolated model checks.
EVIDENCE
In the closing summary the moderator stresses that safety evaluation must be systemic and institution-focused, warning against auditing algorithms while ignoring the institutions that operationalise them [255-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A systemic, institution-focused safety evaluation that includes verification and institutional practices is advocated by technical standards bodies [S32].
MAJOR DISCUSSION POINT
Systemic safety evaluation
Agreements
Agreement Points
The rapid emergence of AI‑enabled biodesign tools shifts bio‑risk upstream from physical containment to the design phase, requiring new oversight mechanisms.
Speakers: Speaker 1, Speaker 2
*Structural risk shift* – Speaker 1: AI tools decouple design from physical containment, moving bio‑risk upstream to the design side and demanding new oversight mechanisms. *Pre‑deployment assessment* – Speaker 2: Structured rubrics before release are a critical safeguard, especially for frontier models that can outperform experts.
Both speakers agree that AI-driven biodesign changes the risk landscape by moving the critical control point to the design stage and that pre-deployment safety checks are essential to manage this shift [10-12][44-48].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with emerging governance principles that call for embedding safety and responsibility from the design stage of advanced AI systems [S46][S47] and reflects concerns about dual-use biotechnologies that require upstream risk assessment [S59].
Oversight should be decentralised and empower local biosafety, biosecurity and information‑security offices rather than rely on a single central authority.
Speakers: Speaker 1, Audience Member 3, Moderator
*Decentralized oversight* – Speaker 1: A single authority in Delhi cannot manage everything; checks and balances must be spread to labs and offices. *Empowered biosafety officers* – Audience Member 3: Institutional biosafety officers should be empowered and have clear reporting channels to central leadership. *Governance framing* – Moderator: The discussion must decide the appropriate governance lens (data, model design, verification), which implies choosing the right institutional architecture.
All three stress that a distributed network of empowered local units is needed for timely, adaptive governance of AI-bio risks, with mechanisms to aggregate information centrally [24-27][295-299][1].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for decentralized oversight with empowered local biosafety officers have been articulated in recent workshops on AI and biosafety [S49] and are echoed in discussions about strengthening national-level capacities.
Governance should be tiered or differentiated according to the capability and risk level of AI tools, avoiding blanket restrictions.
Speakers: Speaker 1, Speaker 2
*Tiered risk classification* – Speaker 1: Not everything is highly risky; high‑risk biodesign tools should be treated differently from low‑risk ones. *Tiered access & contextual norms* – Speaker 2: Adopt differentiated, capability‑level governance (pre‑deployment assessments, KYC‑style credentialing) rather than blanket bans.
Both speakers advocate a graduated approach that matches oversight intensity to the specific danger posed by a tool, emphasizing flexibility over one-size-fits-all bans [142-146][41-45].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based, tiered regulatory approaches are highlighted in India’s AI policy framework, which balances experimentation with systemic risk mitigation [S54], and are consistent with broader recommendations to match oversight intensity to AI capability [S55].
Continuous, adaptive monitoring (e.g., semi‑annual reviews) is needed to keep pace with fast‑moving AI capabilities.
Speakers: Speaker 1, Speaker 2
*Adaptive, continuous oversight* – Speaker 1: Traditional periodic inspections are insufficient; oversight must evolve in real time. *Periodic global monitoring* – Speaker 2: Proposes a six‑monthly, government‑backed AI safety institute to conduct independent assessments.
Both agree that static, infrequent reviews cannot match AI’s speed; a regular, possibly semi-annual, monitoring cadence is required, supported by adequate funding [23-24][105-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Adaptive monitoring and regular policy reviews are recommended to cope with the rapid pace of AI innovation, as noted in UN Secretary-General strategy discussions and adaptive leadership reports [S55][S57][S62].
AI safety evaluation should be embedded within existing biosafety, grant‑review and incident‑reporting processes rather than treated as a separate activity.
Speakers: Speaker 1, Speaker 2, Speaker 3
*Integrate AI evaluation into biosafety systems* – Speaker 1: Align AI risk assessment with current biosafety offices. *Pre‑deployment assessment & credential network* – Speaker 2: Results of assessments should be shared across a credentialed network and integrated into grant decisions. *Incident‑reporting framework tailored to Indian settings* – Speaker 3: Developed a context‑specific incident‑reporting mechanism.
All three stress that AI safety checks must be woven into existing institutional workflows (biosafety offices, funding reviews, and incident-reporting pipelines) to ensure coherence and effectiveness [18-20][118-119][169-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding AI safety into existing review and reporting mechanisms is advocated in recent AI governance workshops that stress integration rather than retrofitting safety checklists [S46][S47].
Capacity gaps in the Global South require tailored AI readiness programmes, training, and collaborative networks.
Speakers: Speaker 1, Speaker 3, Moderator
*Invest in capacity building* – Speaker 1: Train more people in AI‑enabled biosafety, chemical security, etc. *Capacity gaps & AI readiness* – Speaker 3: Highlight heterogeneity of resources across South‑East Asian institutions and the need for locally‑relevant solutions. *Empower emerging scientific powers* – Moderator: Emerging powers should shape governance rather than merely import Western frameworks.
There is consensus that building technical and policy capacity in low-resource settings, and creating regional networks, is essential for effective AI-biosecurity governance [15-18][62-66][159-160].
POLICY CONTEXT (KNOWLEDGE BASE)
Addressing capacity gaps through tailored programmes and collaborative networks has been highlighted in multi-stakeholder development forums and in discussions on open-source tools for low-resource settings [S51][S65].
Multi‑stakeholder collaboration and shared standards (including data‑standard harmonisation) are crucial to avoid fragmentation across borders.
Speakers: Speaker 2, Speaker 3, Moderator
*Data‑standard harmonisation* – Speaker 2: Proposes federated standards (HL7‑FHIR‑style) for interoperable biosurveillance. *Collaborative multi‑stakeholder governance* – Speaker 3: Calls for joint work among academia, industry, government, civil society. *Integrate AI safety with legacy systems without fragmentation* – Moderator: Emphasises need for coordinated safety across new AI tools and existing programmes.
All three underline that common technical standards and collaborative governance structures are needed to prevent siloed, fragmented responses to AI-driven bio-risks [226-230][177-179][204-208].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of multi-stakeholder collaboration and harmonised standards to prevent fragmentation is a recurring theme in IGF and policy network sessions on internet fragmentation and AI governance [S50][S51][S53][S58].
Similar Viewpoints
Both see the need for formal, pre‑release safety checks that are tied to funding and research workflows, ensuring that risky capabilities are vetted before they reach the lab or market [44-48][147-148].
Speakers: Speaker 1, Speaker 2
*Pre‑deployment assessment* – Speaker 2 (structured rubrics before release). *Integrate AI evaluation into grant/research processes* – Speaker 1 (AI modules in grant review, cross‑trained panels).
Both propose institutional mechanisms at the regional or global level that regularly assess AI safety and share findings across a trusted community [164-165][105-108].
Speakers: Speaker 2, Speaker 3
*Global‑South network for trustworthy AI* – Speaker 3 (launching a network). *Periodic global monitoring* – Speaker 2 (six‑monthly institute).
Consensus that biosafety officers should be given authority, training, and clear reporting channels to central leadership to create an effective, layered response system [24-27][295-299].
Speakers: Speaker 1, Audience Member 3
*Empower biosafety officers* – Speaker 1 (decentralised checks, empowerment). *Empowered biosafety officers* – Audience Member 3 (layered response, reporting upward).
Unexpected Consensus
Inclusion of psychological and broader non‑physical harms in AI safety taxonomies.
Speakers: Audience Member 1, Speaker 3
*Comprehensive harms taxonomy* – Audience Member 1 (physical, psychological, cyber‑incident, socio‑economic, environmental). *Socio‑cultural benchmarks* – Speaker 3 (need for assessments beyond technical performance, including human‑centric impacts).
While the panel largely focused on biosecurity and technical risk, both the audience member and Speaker 3 highlighted the importance of psychological and socio-cultural harms, extending the safety conversation beyond the expected biological scope [265-274][68-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent AI safety discussions have expanded taxonomies to cover mental health and other non-physical harms, reflecting findings on psychological impacts of AI use [S64] and calls for contextual safety definitions involving diverse stakeholders [S63].
Recognition that open‑source tools are essential for low‑resource settings and should not be automatically restricted.
Speakers: Speaker 2, Speaker 1
*Preserving open‑source benefits* – Speaker 2 (open‑source critical for low‑resource innovation). *Digital‑to‑physical barrier* – Speaker 1 (even with open tools, physical infrastructure limits risk).
Speaker 1 did not explicitly champion open-source, yet his point about the digital-to-physical barrier implicitly supports the idea that open tools alone do not create immediate danger, aligning with Speaker 2’s stance that open-source should be preserved for developing contexts [250-251][53-56].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source AI is identified as a catalyst for innovation and global partnership, especially for low-resource environments, arguing against blanket restrictions [S65].
Overall Assessment

The panel shows strong convergence on several core themes: the upstream shift of bio‑risk due to AI, the need for decentralised and capacity‑building‑focused oversight, tiered and adaptive governance, and the integration of AI safety into existing institutional processes. Participants from different backgrounds (bio‑security, AI policy, regional capacity building) largely reinforce each other’s proposals rather than contradict them.

High consensus – most speakers align on the structural nature of the problem and on concrete policy levers (decentralised checks, tiered risk regimes, continuous monitoring, capacity building, and collaborative standards). This broad agreement suggests that future work can move quickly toward implementing multi‑layered, region‑specific governance frameworks without needing to resolve major conceptual disputes.

Differences
Different Viewpoints
Centralisation vs decentralisation of oversight mechanisms
Speakers: Speaker 1, Speaker 2
*Decentralized oversight* – Speaker 1: “If there is one authority sitting somewhere in Delhi and trying to do everything, that’s not going to work… How do we decentralize these kind of oversight systems to some extent?” [24-26] *Anchor AI-safety institutes within existing international frameworks* – Speaker 2: “…implement this AI safety or security institute model… It is technically credentialed. It’s independent, but also has a very… formal relationship with the government… the institution to have some kind of anchoring around biological weapons convention or the WHO…” [113-118][116-118]
Speaker 1 argues that a single national authority cannot keep pace with AI-driven bio-risk and calls for a network of empowered institutional biosafety and information-security offices (decentralised checks and balances) [24-26][27-31]. Speaker 2 proposes creating a dedicated AI safety institute that is formally linked to governments and anchored to international treaties such as the BWC or WHO, implying a more centralised, globally coordinated body [113-118][116-118].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors recent calls for decentralized oversight with empowered local offices versus centralized international bodies, as discussed in biosafety workshops and AI safety institute forums [S49][S58].
Frequency and nature of oversight (continuous adaptive vs semi‑annual ritual)
Speakers: Speaker 1, Speaker 2
*Adopt adaptive, continuous oversight mechanisms* – Speaker 1: “We need something which is far more adaptive and quick… Traditional periodic paper-based inspections are insufficient…” [23-24][135-138] *Periodic global monitoring* – Speaker 2: “We recommended that governments and also independent researchers do this six-monthly ritual of monitoring and also assessment of risk on a continuous basis…” [105-108][111-112]
Speaker 1 stresses that oversight must evolve in real time with rapid AI advances, moving beyond occasional paper-based inspections to adaptive, continuous checks [23-24][135-138]. Speaker 2 suggests a concrete six-monthly monitoring cadence, supported by multilateral funding and automation, as the primary mechanism for ongoing risk assessment [105-108][111-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy circles are divided on oversight cadence, with some advocating continuous adaptive monitoring and others favoring periodic semi-annual reviews, reflecting differing interpretations of adaptive governance recommendations [S55][S57][S62].
Unexpected Differences
International anchoring of AI‑safety bodies vs national‑level decentralised approach
Speakers: Speaker 1, Speaker 2
*Decentralized oversight* – Speaker 1: emphasises local empowerment without reference to international treaties [24-26][27-31] *Anchor AI-safety institutes within existing international frameworks* – Speaker 2: explicitly calls for linking the institute to the Biological Weapons Convention or WHO [116-118]
Speaker 1’s vision stays within national institutional reforms, whereas Speaker 2 introduces an unexpected layer of international treaty-based anchoring, revealing a divergence in the perceived locus of legitimacy and authority for AI-biosecurity governance. This contrast was not anticipated given the predominantly national focus of the earlier discussion. [24-26][116-118]
POLICY CONTEXT (KNOWLEDGE BASE)
Tensions between establishing international AI-safety institutions and maintaining national-level decentralized oversight have been highlighted in discussions on global AI safety institutes and the need for coordinated yet locally responsive frameworks [S58][S49].
Overall Assessment

The panel shows strong consensus on the need for enhanced AI‑biosecurity governance, capacity building, and multi‑stakeholder collaboration. The principal disagreements centre on the architecture of oversight (decentralised national networks vs a centralised international institute) and on the cadence of monitoring (continuous adaptive mechanisms vs a fixed six‑monthly ritual). These divergences reflect differing assumptions about feasibility, resource allocation, and legitimacy, but they do not undermine the shared recognition of risk.

Moderate – while all participants agree on the problem and the overarching goal of safer AI‑enabled biology, they diverge on structural and procedural solutions. The implications are that any policy outcome will need to reconcile decentralised national capacities with some form of coordinated, possibly internationally‑anchored, monitoring framework, and must balance the desire for real‑time adaptability with the practicality of periodic reviews.

Partial Agreements
All three speakers agree that robust safety evaluation is essential, but they differ on the implementation route: Speaker 1 favours embedding AI risk checks within existing institutional biosafety and grant‑review processes; Speaker 2 favours a semi‑annual, credentialed, globally coordinated monitoring institute; Speaker 3 pushes for a regional South‑South network and a shared safety commons to provide capacity‑building and evaluation tools. [18-20][105-108][164-166]
Speakers: Speaker 1, Speaker 2, Speaker 3
*Integrate AI evaluation into existing biosafety systems* – Speaker 1: “Integrating AI evaluation into biosafety system, strengthening the institutional readiness…” [18-20] *Periodic global monitoring* – Speaker 2: “…six-monthly ritual of monitoring and also assessment of risk…” [105-108] *Establish a Global South network for trustworthy AI* – Speaker 3: “We are going to launch a global south network for trustworthy AI…” [164-166]
Takeaways
Key takeaways
AI‑enabled biodesign tools shift bio‑risk upstream from physical labs to the design phase, requiring new governance structures.
Centralised oversight (e.g., a single authority in Delhi) is insufficient; oversight must be decentralised to institutional biosafety offices, information‑security offices, and regional bodies.
Open science can be preserved by using tiered, capability‑level access controls and contextual norms rather than blanket bans, while retaining the benefits of open‑source tools for low‑resource settings.
There are significant AI readiness and capacity gaps in India and other Global South countries; governance must be tailored to local socio‑cultural contexts and resource levels.
Existing safety benchmarks often fail for biological applications; participatory, region‑specific assessments are needed.
Independent, periodic (e.g., six‑monthly) evaluation and red‑team exercises, supported by a dedicated AI safety institute, are essential for continuous risk monitoring.
Pre‑deployment assessments using structured rubrics are a critical safeguard before releasing frontier models.
Cross‑border biosurveillance suffers from fragmented data standards and legal regimes; harmonised federated standards and pre‑negotiated safe‑harbor agreements are required.
A comprehensive taxonomy of harms should include physical, psychological, cyber‑incident, socio‑economic, and environmental impacts.
Model performance can degrade over time due to data drift; continuous monitoring and adaptation are necessary.
Effective incident‑response frameworks need empowered biosafety officers at the institutional level reporting to coordinated central leadership, with clear agency roles as exemplified by Singapore.
Resolutions and action items
Develop and deploy tiered access mechanisms and contextual norms for high‑risk AI biodesign tools (pre‑deployment assessment, KYC‑style credentialing).
Create a six‑monthly, government‑backed AI safety institute to conduct independent evaluations and share findings through a credentialed network.
Establish an AI safety commons for the Global South to provide shared evaluation resources and benchmarks.
Launch a Global South network for trustworthy AI and an incident‑reporting framework tailored to Indian and regional contexts.
Integrate AI evaluation modules into grant‑review processes and form cross‑trained AI biosafety review panels at institutions.
Promote capacity‑building programmes for biosafety officers, information‑security staff, and other stakeholders in the Global South.
Adopt federated data‑standard frameworks (e.g., HL7‑FHIR‑style) for biosurveillance interoperability across countries.
Negotiate pre‑emptive legal safe‑harbor agreements to enable rapid cross‑border data sharing during public‑health emergencies.
Implement continuous model‑monitoring pipelines to detect and mitigate temporal data drift.
Encourage decentralised yet coordinated incident‑response structures, drawing on multi‑agency models such as Singapore's.
Unresolved issues
Specific mechanisms and governance models for decentralising oversight while maintaining effective central coordination remain undefined.
Funding models and international collaboration structures for the proposed AI safety institute and Global South safety commons are not settled.
How to enforce tiered access and credentialing without stifling legitimate research, especially in low‑resource environments, needs further clarification.
The process for creating and maintaining region‑specific socio‑cultural safety benchmarks and participatory assessment frameworks is still open.
Legal pathways to establish pre‑negotiated safe‑harbor agreements across diverse jurisdictions have not been detailed.
Strategies for monitoring and regulating DIY or small‑scale commercial biodesign activities outside formal oversight structures are not resolved.
Methods to ensure consistent incident reporting and data sharing between institutions and central authorities are still under discussion.
Suggested compromises
Adopt differentiated, capability‑level governance (tiered access, contextual norms) instead of blanket restrictions on AI tools.
Combine decentralized institutional oversight with a coordinated central leadership layer for incident aggregation and response.
Allow open‑source development while applying pre‑deployment assessments and credentialed access for high‑risk capabilities.
Balance rapid, adaptive safety measures with existing periodic review processes by introducing faster, AI‑assisted monitoring cycles.
Integrate both technical (model‑level) and socio‑technical (institutional, cultural) assessments to capture the full risk spectrum.
Thought Provoking Comments
AI biodesign tools are decoupling risk from physical lab containment and moving the risk upstream to the design phase, fundamentally changing the biosafety landscape.
Highlights a structural shift where AI enables biological design without traditional physical safeguards, creating new upstream vulnerabilities that existing governance models may not address.
Set the stage for the discussion on the need for new oversight mechanisms, prompting later speakers to propose decentralized checks, pre‑deployment assessments, and capability‑aware safeguards.
Speaker: Speaker 1
We should adopt a tiered access and contextual norms approach—using pre‑deployment assessments and KYC‑style credentialing—to differentiate between defensive research and unrestricted open‑source tools.
Introduces a concrete, nuanced governance framework that balances openness with security, moving beyond binary yes/no answers.
Shifted the conversation from abstract risk to actionable policy ideas, leading the moderator to ask about institutional gaps and influencing later suggestions about differentiated capability‑level governance.
Speaker: Speaker 2
AI readiness varies dramatically across regions; Southeast Asian countries need sociocultural benchmarks and small‑language‑model solutions tailored to low‑resource settings rather than importing Western‑centric frameworks.
Points out the mismatch between global AI safety standards and local capacities, emphasizing the importance of culturally aware evaluation and participatory design.
Redirected the dialogue toward equity and capacity‑building, prompting discussions on localized incident reporting, AI safety commons for the Global South, and the need for adaptable frameworks.
Speaker: Speaker 3
A six‑monthly, independent, credentialed institute—modeled after the IAEA—should conduct continuous risk monitoring and share assessment results through a tiered‑confidentiality network.
Proposes an institutional model that institutionalizes red‑teamings and continuous oversight, linking technical evaluation with multilateral governance structures.
Introduced the idea of a formal, recurring global safety ritual, influencing later remarks about building AI safety institutes in India and the need for sustained governmental investment.
Speaker: Speaker 2
Safety measures must move upstream, include tiered risk classification for biodesign tools, and integrate AI evaluation into grant reviews and domestic evaluation capacity, while also leveraging tech‑sovereignty to control data flows.
Combines practical steps (grant‑review integration, cross‑trained panels) with strategic concepts (tech sovereignty), bridging policy and technical domains.
Deepened the conversation about implementation, leading to concrete suggestions such as AI safety institutes, incident‑reporting frameworks, and the need for proportionate, capability‑aware safeguards.
Speaker: Speaker 1
Fragmentation in biosurveillance arises from incompatible data standards and lack of legal safe‑harbors; we need federated standards (e.g., HL7‑FHIR‑like), pre‑negotiated cross‑border data‑sharing agreements, and shared evaluation criteria.
Identifies a concrete technical‑legal bottleneck that hampers coordinated pandemic response and links it to AI safety, offering a clear roadmap for harmonization.
Steered the discussion toward interoperability challenges, prompting audience questions about temporal data drift and reinforcing the theme of cross‑border collaboration.
Speaker: Speaker 2
Incident response must be decentralized yet integrated: empower biosafety officers at the lab level, provide clear reporting channels to central leadership, and ensure top‑down visibility of grassroots incidents.
Synthesizes earlier points about decentralization with a practical governance chain, addressing both prevention and rapid response.
Served as a concluding turning point, aligning the panel around a shared vision of layered oversight and influencing the final audience question about a “web of prevention and incident response framework.”
Speaker: Speaker 1
Overall Assessment

The discussion evolved from recognizing a fundamental shift in biosafety risk—AI moving threat creation upstream—to debating concrete governance mechanisms that balance openness with security. Early insights about upstream risk and tiered access reframed the conversation, prompting participants to surface regional capacity gaps, propose institutionalized monitoring bodies, and stress the need for interoperable data standards. These pivotal comments redirected the dialogue from abstract concerns to actionable, context‑sensitive solutions, ultimately converging on a shared vision of decentralized yet coordinated oversight that can be adapted by emerging scientific powers in the Global South.

Follow-up Questions
How do we preserve the benefits of open science while preventing the destabilizing diffusion of high‑risk AI capabilities?
Balancing openness with security is crucial to retain scientific collaboration without enabling misuse of powerful biodesign tools.
Speaker: Moderator (directed to Speaker 2)
What are the most immediate gaps in evaluating systems, technical capability, regulatory and coordination from a policy perspective?
Identifying priority policy gaps helps focus resources on the most pressing weaknesses in AI‑biosecurity governance.
Speaker: Moderator (directed to Speaker 3)
Should independent evaluation and red‑teaming of AI systems that generate biological outputs become a norm and part of the global scientific specialist infrastructure? If so, how would we implement it?
Establishing systematic, independent oversight could provide continuous risk monitoring and build trust across nations.
Speaker: Moderator (directed to Speaker 2)
How can we ensure safety measures remain rigorous and feasible within heterogeneous research ecosystems, especially in low‑resource settings?
Practical, adaptable safety frameworks are needed to work across institutions with varying resources and expertise.
Speaker: Moderator (directed to Speaker 1)
Can emerging scientific powers in the Global South shape AI governance, and what would leadership look like in scientific AI ecosystems?
Understanding the role of middle‑income countries can inform inclusive, context‑aware governance models.
Speaker: Moderator (directed to Speaker 3)
Should safety focus be primarily at the model level, or should broader socio‑technical readiness and misuse considerations be emphasized?
Determining the appropriate scope of safety assessment influences how risks are identified and mitigated.
Speaker: Moderator (directed to Speaker 1)
How do we ensure safety, evaluation, and interoperability across legacy and emerging AI systems without fragmentation?
Coordinated standards prevent siloed efforts and enable seamless integration of new AI tools with existing public‑health infrastructure.
Speaker: Moderator (directed to Speaker 2)
What work is being done on defining and categorising non‑physical harms (psychological, socio‑economic, etc.) in AI safety?
Expanding harm taxonomies beyond physical risks is essential for comprehensive AI safety assessments.
Speaker: Audience Member 1 (directed to Speaker 3)
How will temporal data drift affect model performance, and how can we mitigate it?
Models may degrade over time; systematic monitoring and adaptation are needed to maintain safety and reliability.
Speaker: Audience Member 2 (directed to Speaker 3)
What would a successful web of prevention and incident‑response framework look like, and who are exemplars in this space?
A clear, coordinated response architecture is vital for rapid containment of biosecurity incidents across borders.
Speaker: Audience Member 3 (directed to Speakers 1 and 2)
Research needed on decentralized checks and balances / oversight mechanisms for AI bio‑risk.
Centralized authority may be ineffective; exploring decentralized models could improve responsiveness and coverage.
Speaker: Speaker 1
Research needed on tiered access and contextual norms for AI biodesign tools.
Differentiated governance can allow legitimate research while restricting malicious use.
Speaker: Speaker 2
Research needed on AI readiness benchmarks and sociocultural safety evaluations for Southeast Asia.
Current models trained on Western data underperform in regional contexts; tailored benchmarks are required.
Speaker: Speaker 3
Research needed on institutionalizing six‑monthly independent monitoring via an AI safety institute linked to international bodies.
Regular, credentialed assessments could provide continuous oversight but require multilateral investment and governance structures.
Speaker: Speaker 2
Research needed on designing proportionate, capability‑aware safeguards that are adaptive and quick for low‑resource labs.
Traditional periodic, paper‑based audits are too slow for fast‑moving AI developments.
Speaker: Speaker 1
Research needed on building incident‑reporting frameworks and taxonomies tailored to Indian and Global‑South contexts.
Context‑specific reporting captures diverse harms and improves response in varied regulatory environments.
Speaker: Speaker 3
Research needed on creating a Global South network for trustworthy AI and an AI safety commons.
Shared resources and standards can accelerate capacity building across developing nations.
Speaker: Speaker 3
Research needed on harmonising data standards and establishing legal safe harbours for cross‑border biosurveillance data sharing.
Standardised, legally protected data exchange is critical for effective regional outbreak detection and response.
Speaker: Speaker 2
Research needed on enhancing AI literacy and capacity‑building in marginalized communities.
Equitable understanding of AI risks ensures that vulnerable groups are not disproportionately affected.
Speaker: Speaker 3
Research needed on integrating AI evaluation modules into grant review processes and establishing cross‑trained biosafety review panels.
Embedding safety checks early in funding decisions can pre‑empt risky deployments.
Speaker: Speaker 1
Research needed on applying tech‑sovereignty measures to AI safety and security.
Domestic control over AI tools may reduce reliance on external platforms and improve national security.
Speaker: Speaker 1
Research needed on developing a comprehensive taxonomy for psychological and other non‑physical harms, and tools to assess perceptions among healthcare workers.
Understanding user perception and psychological impact informs targeted training and risk mitigation.
Speaker: Speaker 3
Research needed on systematic monitoring of model drift and distribution shift as part of safety monitoring.
Continuous detection of data drift ensures models remain accurate and safe over time.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Algorithms and the Future of Global Diplomacy


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence is being integrated into diplomatic work, focusing on its practical use within the German Federal Foreign Office and its broader geopolitical implications. Raphael Leuner explained that the German government launched data labs across all federal ministries in 2021, creating sixteen labs including one in the Foreign Office, where he works as a data scientist [12-15]. He emphasized that being embedded in the ministry allows rapid co-creation of AI tools with short feedback loops, which is essential for a fast-moving field that cannot rely on traditional two-year IT projects [18-21][22-24].


The participants agreed that AI is not a new technology but a new tactical layer in international relations, echoing historical shifts such as the industrial, nuclear, and space revolutions [36-41]. Shahani Yaktiyami highlighted that middle powers like Germany and India can leverage AI by focusing on regulatory influence and sector-specific applications rather than competing for frontier model leadership, which is dominated by the United States and China [48-51]. Norman Schulz warned that without international cooperation AI poses risks comparable to nuclear technology and called for a future US-China dialogue to establish safety regulations [66-78].


Schulz also described Germany’s role in negotiating the UN Global Digital Compact and the creation of an Independent Scientific International Panel on AI to ensure inclusive, science-based governance [166-176][179-184]. Shyam Krishnakumar pointed out that India, while not yet building large-scale frontier models, excels in contextual innovation, large-scale inference at low cost, and a skilled workforce, making it a strong partner for application-driven AI projects [92-102][104-110]. He suggested concrete Indo-German cooperation in industrial AI and healthcare, where Germany contributes automation expertise and data, and India provides model development and extensive surgical data [108-119].


Raphael Leuner added that the Foreign Office deliberately adopts open-source AI, reusing existing applications and developing negotiation-support tools, while monitoring the growing influence of Chinese open-source models and encouraging Indian alternatives [128-136][137-138]. The panel stressed that AI should augment, not replace, diplomatic analysis; AI can accelerate document processing and free diplomats for strategic thinking, but final decisions remain human [250-259].


Regarding media narratives, Shahani warned that allowing AI to shape geopolitical stories risks bias and manipulation, advocating for human-led narrative framing and bias-detection tools [274-283]. Norman noted that AI can help detect bias but should not be used to generate repetitive news content, reinforcing the need for human creativity [287-292].


Overall, the discussion concluded that while AI presents both opportunities and challenges for diplomacy, middle-power collaboration, open-source development, and inclusive governance are essential to harness its benefits responsibly [202-214][170-176].


Keypoints


Major discussion points


AI implementation inside the German Foreign Office – The German government created 16 data labs across federal ministries in 2021-2022, allowing data scientists like Raphael to work directly with diplomats and rapidly co-create AI solutions, especially for tasks such as negotiation support and document analysis. The team deliberately relies on open-source models and re-uses existing applications to keep development agile. [12-15][18-23][127-133]


Geopolitical framing of AI as “technology diplomacy” – Panelists stressed that AI is the latest layer of a long-standing pattern in which technology reshapes foreign policy (the industrial, nuclear, and space revolutions). While great powers compete for frontier AI leadership, middle powers such as India and Germany can leverage their specific strengths, regulatory influence for Germany and application-driven deployment for India, to carve out a role in the AI value chain. [35-41][46-51][78-85][90-110]


Governance, security, and sovereignty concerns – There is a consensus that AI’s rapid diffusion creates risks (bias, weaponisation, dependence on foreign models). Participants called for international cooperation, regulation, and the development of indigenous or trusted AI (avoiding unchecked Chinese open-source models). The UN-led Global Digital Compact and its Independent Scientific International Panel on AI are highlighted as mechanisms to embed inclusive, values-aligned governance. [68-77][146-149][157-166][166-184]


Indo-German cooperation on applied AI – Both sides see concrete collaboration opportunities in sectoral AI (industrial automation, healthcare, robotics). India’s large talent pool and cost-effective model development complement Germany’s industrial data, automation expertise, and investment capacity. Joint open-source projects and shared use-case pilots are proposed as a way for middle powers to create “more than one-plus-one” value. [108-119][130-136][202-214]


AI’s impact on information narratives and media – While AI can accelerate data processing, panelists warned against letting algorithms dictate geopolitical narratives. Risks include amplification of biased or malicious content, especially when AI-generated fake sites are used for influence operations. Human oversight, bias-detection tools, and regulatory safeguards are deemed essential. [274-280][287-292][294-298]


Overall purpose / goal of the discussion


The panel aimed to explore how AI is being adopted as a practical tool within foreign ministries, to assess its broader geopolitical implications, and to identify pathways for responsible governance and collaborative action, particularly between middle powers like India and Germany, so that AI can be harnessed for diplomatic effectiveness while mitigating security and ethical risks.


Overall tone and its evolution


Opening (0:00-3:00): Informative and optimistic, highlighting the novelty of data labs and the speed advantages of internal AI development.


Mid-section (3:00-12:00): Shifts to a broader, more analytical tone, situating AI within historical technology-diplomacy and emphasizing strategic competition among great powers versus opportunities for middle powers.


Later segment (12:00-22:00): Becomes cautionary and policy-focused, stressing governance, sovereignty, and the need for international frameworks (global digital compact, AI panel).


Final part (22:00-38:00): Returns to a collaborative, solution-oriented tone, discussing concrete Indo-German projects, open-source initiatives, and the balance between AI’s benefits and its risks to narrative integrity.


Overall, the conversation moves from enthusiasm about AI’s potential, through a sober assessment of geopolitical and ethical challenges, to a constructive call for cooperative, values-aligned action.


Speakers

Raphael Leuner – Data Scientist at the German Federal Foreign Office; works on AI tools and data labs within the Foreign Office [S1].


Gunda Ehmke – Moderator/Host of the panel discussion [S2].


Norman Schulz – Consul at the Coordination Staff for AI and Digital Technologies, German Foreign Office; diplomat [S4].


Shyam Krishnakumar – Associate at the Pranav (Pranava) Institute; focuses on emerging technology, public policy, and society from an India-first perspective [S5][S6].


Dr. Shahani Yaktiyami – Senior Officer, Technology Program at the German Marshall Fund; specialist in technology diplomacy and AI policy [S1][S7].


Audience – Various audience members (e.g., Sreeni, a student at Ashoka University; Sanjeevni, a radio journalist in the UK) who asked questions during the session.


Additional speakers:


Jian – Mentioned in the transcript as a participant to be addressed by the moderator; no further role or title provided.


Full session report: Comprehensive analysis and detailed insights

The panel opened with Gunda Ehmke introducing the participants – Raphael Leuner, a data scientist from the German Federal Foreign Office; Dr Shahani Yaktiyami, senior officer for technology programmes at the German Marshall Fund; and Norman Schulz, a consular officer responsible for AI and digital technologies at the Foreign Office – and set the agenda to explore AI both as a diplomatic tool and as a policy issue [1-4].


German data-lab model – Leuner explained that the 2021 government decision to create data labs in every federal ministry resulted in sixteen labs, one of which is embedded in the Foreign Office [12-15]. Being inside the ministry enables “very, very short contacts and short paths” to diplomats, allowing rapid co-creation of AI solutions that would be impossible with traditional two-year IT projects and large, costly teams [18-21]. The lab’s early work focused on breaking down data silos; since the rise of generative AI it has shifted to building AI tools for negotiations, notably analysing large document collections [16-18][22-24][130-133]. From the outset the team chose open-source technologies and reused existing state-government applications, a strategy intended to keep development agile and cost-effective [127-133][128-136]. Leuner noted the growing worldwide adoption of Chinese open-source models, pointed to the potential consequences for Germany and Europe, and expressed enthusiasm for Indian large-language-model alternatives [134-138].


Geopolitical framing – Shahani placed AI within a long historical pattern in which new technologies reshape foreign policy – from the Industrial and nuclear revolutions to the space race – arguing that while the technology is new, the diplomatic tactics it enables are not [35-41][42-44]. She highlighted that the current “AI race” is dominated by the United States and China, but that middle powers can still exert influence: Germany through regulatory and rule-making power, and India through large-scale application and deployment [46-51][78-85]. Shahani’s view was echoed by Shyam Krishnakumar, who described India as a “digital powerhouse” with a vast talent pool capable of building context-specific models and performing cheap, large-scale inference, even though it does not yet develop frontier-scale models [90-102][104-110].


Regulation and global governance – Schulz stressed that AI’s rapid diffusion creates security risks, and in Germany’s case also sovereignty risks linked to Chinese models [66-78]. Answering Gunda’s question on whether the ministry’s current approach to AI is still the right one, he stated “the short answer would be no… we need stronger regulation and international cooperation” [70-73]. He compared the emerging AI threat to the nuclear era and called for a future US-China dialogue to set safety limits [66-78]. Schulz also outlined Germany’s leadership in negotiating the UN Global Digital Compact and in establishing the Independent Scientific International Panel on AI, which will produce its first scientific report and feed into a follow-up global AI dialogue in July in Geneva alongside the ITU AI for Good summit [166-176][179-184]. Shahani added that each country must factor its own security context – such as Germany’s concerns over Ukraine and India’s border disputes – into technology decisions, and noted that many corporations now appoint geopolitical-risk advisors [146-149][141-148].


Open-source versus domestic development – Leuner argued that using open-source AI, even when sourced from China, is a pragmatic way to accelerate deployment and advocated for Indian open-source alternatives to diversify the ecosystem [128-136][137-138]. Schulz counter-argued that the safest route to align AI with national values is to develop systems in-house [157-158] and later warned that Chinese models embed “Chinese ways of thinking” that could compromise sovereignty [157-158]. This tension reflects a broader disagreement on whether reliance on existing open-source models is acceptable or whether new, domestically-controlled models are required [Disagreements].


Middle-power cooperation – Norman referenced Canadian Prime Minister Mark Carney’s call for middle-power collaboration, reinforcing the idea that Germany and India can jointly shape the AI value chain [78-85]. Krishnakumar proposed concrete Indo-German projects: in industrial AI, Germany would contribute automation expertise and industrial data while India would supply model-building capacity; in healthcare AI, India would bring a massive surgical-data set and Germany would provide investment power [108-119][130-136]. Leuner reinforced this vision, stating that middle powers should co-develop non-frontier, open-source AI projects that can be readily adopted by a broad range of users [202-214].


Day-to-day impact on diplomatic work – In response to an audience question on automating foreign-policy research, Schulz explained that AI will not replace decision-making but will speed up information consumption, freeing diplomats to focus on analysis, relationship-building and strategic thinking [250-259]. Leuner agreed that AI can automate the labour-intensive task of processing large document troves, while final decisions remain human-led [130-133][211-212]. Both cautioned that AI should not replace the creative and innovative thinking diplomats bring to negotiations [159-164].


Risks of AI-shaped narratives – An audience query about AI-generated propaganda prompted Shahani, Norman and Leuner to warn that generative models are already being weaponised to amplify disinformation, create fake websites and shape geopolitical narratives [295-298]. Shahani advocated for bias-detection tools and human oversight; Norman echoed that AI can help detect bias but should not be used to generate repetitive news content; Leuner stressed the need to develop alternatives to Chinese models to safeguard against strategic dependence [277-280][284-286][288-292][294-298].


Points of contention – (1) No consensus on whether open-source models from strategic rivals should be used or whether wholly domestic development is required [Disagreements 1]; (2) Divergence on the novelty of AI-driven diplomatic tactics – Shahani sees continuity with past technologies, while Schulz disputes that claim [Disagreements 4]; (3) Varying emphasis on multilateral regulation versus rapid internal innovation [Disagreements 3].


Take-aways – (1) The German Foreign Office’s data-lab model demonstrates the value of fast, internal co-creation of AI tools for negotiation support and document analysis; (2) AI should be viewed as an augmenting tool that automates routine information processing while preserving human decision-making; (3) Middle powers can wield influence on the AI value chain through regulatory leadership (Germany) and application-driven deployment (India), especially via sector-specific collaborations in industrial and health AI; (4) Inclusive, science-based global governance – exemplified by the UN Global Digital Compact and the Independent Scientific International Panel on AI – is essential to manage security, sovereignty and bias risks, particularly given the proliferation of Chinese open-source models.


The panel committed to continue using open-source technologies, to pursue joint Indo-German pilots in industrial and health AI, and to support the Global Digital Compact and the upcoming UN AI dialogue in Geneva. Unresolved issues include the precise mechanisms for ensuring AI systems align with national values, the development and scaling of non-frontier open-source models, and the detailed governance structures for bilateral cooperation. A suggested “managed interdependence” approach would combine Germany’s regulatory expertise with India’s model-building capacity while jointly developing open-source alternatives to reduce reliance on any single external source [Key takeaways and suggested compromises].


Session transcript: Complete transcript of the session
Gunda Ehmke

Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiyami, Senior Officer, Technology Program at the German Marshall Fund. And we have Norman Schulz, Consul at the Coordination Staff, AI and Digital Technologies at the German Foreign Office. And to kick off the conversation today, so we will cover both AI as a topic and as a tool, I would like to first start with the tool. So going to Raphael, who is a Data Scientist, how do you use AI in the Foreign Office? And I also know that you have data labs, data and AI labs in the Foreign Office. So could you maybe share a little bit of your day-to-day work?

And yeah, actually, how could AI be used in diplomacy?

Raphael Leuner

Yeah, thanks so much. Yeah, maybe to take a step back and answer the question how someone like me, a data scientist by training, ends up in a foreign ministry. I think that’s something that, at least when we talk to colleagues around the world, is still rather rare. We had kind of the lucky coincidence that, I think in 2021, the German government decided to start data labs in all of its federal ministries. And so in the years since then, until 2022, some 16 data labs have been founded in the German federal government. And I was lucky enough to be part of the one in the German Federal Foreign Office. Yeah.

And I have been working on AI ever since. We started more on traditional data science, I would say. So tearing down data silos between governments or government institutions, in Germany and, of course… And ever since ChatGPT and the AI revolution, we have been working mostly on AI tools. And I think the big advantage that we see is that we are in the ministry itself and have very, very short contacts and short paths to our colleagues who are working in Berlin and, of course, all around the world. And I believe in a field that is as fast moving as AI, that is so important, because it doesn’t really work to develop these tools in sort of a traditional IT way of doing things, right?

We used to have IT development projects that take two years, have huge teams, cost a lot of money, but that are just not fast enough to deliver on an AI solution that our colleagues are experiencing themselves in their private lives, right? And some of them even… some official aspects. So what we think is the big advantage that we have, and what we from our experience would always advertise for, is kind of this fast co-creation from within an organization. And I think that, for a topic like diplomacy, is the best way of leveraging AI. And I’m happy to go into more detail about that.

Gunda Ehmke

Thank you. And I will later ask you more on concrete use cases. But first, I would like to switch to the geopolitical dimension. So Shahani, switching over to you. Taking a step back, AI is now more or less present everywhere in the political landscape. From the Arctic, but also here at the Summit. Can you give us a broader picture? How is AI shaping diplomacy or foreign policy in general? What is the debate and where are we at the moment?

Shahani Yaktiyami

Thank you. Thank you for the question and also the invitation to be here, which is actually also me being in my home country. So you’ve invited me to my home country, which is an interesting space to be in. But at the broader sort of geopolitical level, AI is shaping not only sort of how we use technology in our strategic communication as countries as well, but as a tool of technology diplomacy. And I don’t necessarily think this is particularly new. Throughout the history of international relations and foreign policy, technology has always shaped our foreign policy. So this is the AI revolution. But if we take it back to the Industrial Revolution, if we take it back to the nuclear revolution, if we take it back to the space race, technology has always informed diplomacy.

And today it is artificial intelligence. So the technology is new. Yes. But the tactics aren’t. And today we are here at the AI Summit, and this is also India’s way of communicating that it is being a part of a particular technological revolution from which, in its previous histories, because of colonial encounters and things, we’ve been excluded. So in this space, this is a way in which countries from our parts of the world are also trying to kind of claim a space in global technology diplomacy. And this is through AI. And what I would also kind of want to just qualify is that what we’re seeing in this particular sort of AI race is narratives of competition.

So if you look at sort of policy documents coming out of the United States, coming out of China, there’s a clear connection between kind of winning an AI race or securing leadership in artificial intelligence. And if you are a country of that size, and you are the country that has invented the frontier technology and you’ve been sort of the first movers in that, that gives a kind of geopolitical leverage which countries like Germany and India perhaps don’t have, because we aren’t at that frontier capability. But that being said, we’re not powerless; we just have a different form of expressing power, and that is when the entire middle-power conversation comes into play. Both India and Germany can see themselves as, in fact arguably, middle powers, and they have different ways of using their specific leverage on an AI value chain as geopolitical leverage. For Germany, historically, this has been through rules and through regulation and regulatory power. For India, now, it is making a case for applications. So with India, we’ve seen the fact that the summit has changed from the AI Action Summit, which was the French presidency, to India framing it as the Impact Summit, and the slogans of the summit are very, very much to do with aspirations to deployment or aspirations to impact.

So that is really a way in which a middle power like India is also trying to kind of claim its position on the stack. So what you’re seeing are the great powers who are competing at the frontier level, and then there are middle powers who are claiming their specific power on the value chain in different ways. And I’ll stop there for a second.

Gunda Ehmke

Thank you very much. And I would like to pick up this statement that you said. But the tech is new, but the tactics aren’t. So I have here a diplomat sitting next to me. Would you agree with the statement? And how do you govern AI in the Ministry of Foreign Affairs? And would you say, is this still the right approach to AI?

Norman Schulz

Oh, well, the short answer would be no. But the topic is so broad that obviously I could give you a four-hour talk about it. But as a diplomat, as you said, one has to start by saying that the AI Impact Summit here in Delhi, where we are all gathered, showcases the broad variety of AI and the broad picture that AI is now part of everyday life, of all strands of life. That it is a tool in communication. It is a tool in agriculture, in industrial entrepreneurship, in finance, and also in diplomacy and foreign policy. So I find that very interesting what you alluded to, that we have these revolutions all the time, like the Industrial Revolution, like the nuclear revolution after the Second World War.

And where do the foreign ministries, where does foreign policy come in? I mean, the technological revolution created frontrunners like the UK, maybe a little bit like France. But there was a point in time when people saw that only being at the front and adopting the frontier models is not the way to success, but that we have to find a way to regulate things, because otherwise people will lose their lives, it’s not safe to work, it’s polluting the environment. Even back then, there was a problem. Nuclear power, the same thing. There was a race in the 50s. And the Cuba crisis at the beginning of the 60s showed the world that the nuclear race could not go on like it was.

But we need international cooperation to somehow mitigate the risks of it. And I think AI is at a similar point. Maybe it needs a couple more years until the U.S. and China will actively come together and work out what limitations and regulations we have to put on the technology, because the risks in the end are outweighing possible and potential benefits. And the other great question is where do the middle powers come in? And this is what India and Germany are talking about. Well, we had the speech of Mark Carney, the Canadian Prime Minister, in Davos, where he actively called for the middle-power cooperation. And he said, well, we don’t have the power to do that.

I think India is at a wonderful place, because you are a digital powerhouse and you have all the structures and all the workforce to also become an AI powerhouse. I would also make the case that Germany has some advantages. We have infrastructure, we have the money to invest into AI, and we also have industrial data to be a frontrunner. Even if we didn’t succeed at the stage of large language models, maybe when it comes to robotics and embodied AI, Germany will still have a role to play. And obviously we at the Foreign Office are there to accompany the development of this and to prepare the ground for international cooperation. And I’ll leave it at that because others…

Gunda Ehmke

Thank you, thank you. I would like to turn now to the Indian perspective. The Pranav Institute works at the intersection of emerging technology, public policy, and society from an India-first perspective. How do you see potential room for cooperation between India and Germany? Like, we hear now the middle powers, those are middle powers. I hear a lot at the summit that India is leading in AI adoption. I wouldn’t say so, maybe, in Germany; maybe my German colleagues would agree or disagree with me. But from your perspective, where do you see potential cooperation? Could you also go a step back and explain to the audience where you see India at the moment, maybe also in light of the AI summit?

Shyam Krishnakumar

Yeah, can you hear me? I think that’s a very challenging question to answer. Where is India at? India is at a very interesting place, certainly. India is not lagging behind. India is not yet at a place where we can build frontier models; I think the infrastructure capacity for that is very high. I do see some interesting innovation coming out of India. When we saw those 14 models that were released over 14 days, that was very, very interesting in the sense that this is innovation which is grounded, contextual. It is coming from the grassroots. You are able to find native-language use cases. You are able to do inference at scale at an order of magnitude cheaper cost.

So, you are seeing technical innovation which is more context-appropriate coming from India. There is, of course, a large workforce which is talented in technology, and there is an upskilling possibility that certainly exists into AI, and that is a very large pipeline. So I think India is at a very interesting place. India is adopting, India is innovating, India is building applications and use cases, which is a very useful way to think about the technology in its early stages, right? Because there is a huge possibility of investment booms and busts that can come in when you go in a technologically challenging direction without being adaptive. So I think the focus on saying what can we solve is a very useful way to think.

I think the counselor did allude to industrial AI. That’s a fantastic use case where Indo-German cooperation would certainly work out, because there is industrial expertise, there is automation expertise in Germany, there is industrial data. India has the capability to build technology, build models. So I think we should identify that and not worry about the race for frontier models, because transformers are not going to be the same; they’re not going to be the only technology paradigm out there. And not play the game that leading powers are, but really think, as middle powers do, as Shahani said, and say: can we focus on sectoral expertise?

For example, AI in healthcare is a fantastic opportunity for Indo-German cooperation; there is fantastic data available. India performs 10 times the number of surgeries that other countries do, so there’s very interesting data available. Germany has the capacity to invest. Can we cooperate? Germany has expertise in automation. India has, you know, people who can build AI models. Can we cooperate? So I think there is possibility for bilateral cooperation that, you know, gives an argument that is more than one plus one in the case of some of these. And I don’t think it’s a zero-sum game that the U.S. is winning or China is winning and all others are left behind. I think the focus on applications is really where a differentiator is possible, and that need not come at frontier-level costs.

Gunda Ehmke

Thank you. And I would like to focus now on this application side, because this is maybe the way to react to big tech, or like us as a country being in the middle between these mentioned countries. Rafa, can I hand over to you to share a little bit of the Foreign Office approach? I know that you are working on it, a negotiation tool. And to what extent can open source also be, or might be, a solution to the situation where we are at the moment?

Raphael Leuner

Sure. Yeah, so I think it’s exactly as you said, that the focus is on application. We have made a consequential, but I think important, decision at the beginning that when we are implementing AI, we are focusing for most of what we do on open-source technologies, not just the models themselves, but also a lot of the kind of scaffolding and applications around it. So on the one hand, for example, we are reusing applications that come from one of our state governments, who have done kind of a general chat and knowledge-based application that we are reusing. But of course, we have specific applications in the Foreign Office, like supporting negotiations. A lot of what diplomats nowadays do is not necessarily sitting in rooms and negotiating face-to-face, but actually digging through huge piles of documents and…

trying to understand the positions of other countries, the impact that NGOs, academia, corporations bring into huge negotiation processes. And that, as we probably all know, is of course a great chance for artificial intelligence to leverage. I think one important point when we’re talking about AI and open-source AI in governments is that we have seen a big shift in the trend last year, where we have seen that a lot of the kind of leading open-source AI models, and actually also the ones that have been adopted in many parts of the world, are coming from China nowadays. I think that’s an interesting intersection between my position as a technical observer here, where we are looking at the numbers and seeing that really, like, you know, the world is adopting Chinese AI models at the moment.

And, of course, there are consequences that might bring for a country like Germany, for Europe, or for India on a global scale, if some of our partners are implementing Chinese AI models. So when it comes to open source, I think it's really important that countries like India, which I think is in a great position, push for alternatives to these Chinese models. I'm super excited to see these new Indian AI models, these Indian LLMs, and to see whether they can offer such alternatives.

Gunda Ehmke

Thank you. I would like to come back to this impact aspect. We have heard about impact in the public sector, but maybe we can also reflect on the summit, the AI Impact Summit. What are your thoughts on how we will now continue the conversation regarding impact, on really being concrete and not only writing governance frameworks? How can we make this cooperation very concrete, continue from where we are, and face this geopolitical challenge?

Shahani Yaktiyami

Sure, yeah. I like that all the geopolitical questions somehow come back to me, but I don't blame you, because my background is in international relations, so that serves this purpose very well. But I want to connect your point to what you just said about open source and the China connection. I think we're reaching a stage in international relations in which geopolitics and technology can't be separated. When we are integrating artificial intelligence into our daily lives and into our government systems, we can't really separate out the security risks that come with it. And I think every country has a unique security situation. For Germany, obviously, there is the concern with Ukraine. In India, we have border security challenges as well.

We have territorial disputes that are very significant and have very serious national security implications. So the kind of technology we deploy into our systems, whether open source Chinese models or anything else in which we, or any country, would perceive a national security risk, needs to be factored in. That is why even technology decisions now have to factor in geopolitical risk, which back in the day was not something that, say, companies would have to do. But now every single company that I see has a position for a geopolitical risk advisor. And that really comes from the fact that if we are using technologies so seriously in our lives, we do need to factor in how those technologies can be weaponized in a particular geopolitical situation.

And that brings me back to some of the points you made about the Foreign Office finding it helpful for reports to be processed by AI. As a think tank, I have to say I'm a little bit hurt, because a lot of our work is producing those reports, but we will force you to read them. We're very persistent at the German Marshall Fund; we will reach out and invite you and make you read them. But jokes aside, we are aware that our capacity to consume information is shrinking while the world is getting more complex.

And therefore we are also preparing, even in the think-tanking that we do, even in the way we do our daily jobs, to factor in that there will be an AI in the system, and we need to take that into consideration as well.

Gunda Ehmke

And since there will be an AI in the system, we have to make sure that we can trust this AI, that it is inclusive, and that it is ethical, or trustworthy with regard to standards. So how do you and the government react to this? Could you also share more about the Global Digital Compact, and about this panel, the scientific panel I think it's called? And how do we make sure that governance reaches the AI system itself, that the systems are aligned with our values?

Norman Schulz

Well, that's a big question. The best way to align the systems with our values is to develop them ourselves, right, and not just procure them from outside. And I couldn't agree more with the point you made about the Chinese models: even if a model is open source, even if it runs on our servers, it is still a Chinese model. It still carries Chinese ways of thinking, which come through, maybe not all the time. So using AI to do all our diplomatic work will not be the way, because then every report will be the same, right? So I hope Germany will not go down the path of writing diplomatic reports only with AI, or only summarizing them with it.

But we need our diplomats to insert that innovative thinking. And innovative thinking does not come from AI, because AI is much more about replicating and summarizing, in my understanding. The new ideas still come from the human side, as far as I can make it out. The Global Digital Compact: thanks for the question. The Foreign Office was the lead in Germany in negotiating the Global Digital Compact. And obviously you can make the point that this is a UN compact and the UN system is under immense pressure at the moment, so what does it achieve? I would argue that, despite all that, it has produced at least two valuable avenues for future cooperation and discussion, two platforms.

The first is the AI panel. I think it's called the Independent International Scientific Panel on AI, but I could be wrong about the word order; it's rather complicated. It was just yesterday that the UN Secretary-General made the point that the AI panel and the second platform, the dialogue I will come to in a second, are the two major areas where the UN is coming into the picture. The panel has the task of putting the discussions we have about AI at a global level on a scientific basis. So those are experts, and I'm happy that there are two experts from Germany on the panel. Only the US and China also have two experts.

I'm terribly sorry, I don't know how many Indian experts are on the panel, but we'll find that out. So they will produce a first report, a summary of where AI science now stands, in time for the first global dialogue on AI governance, which will happen in July in Geneva, in the margins of, or back to back with, the AI for Good Summit at the International Telecommunication Union. And this dialogue serves the other big purpose of the Global Digital Compact, which is to make the AI discussion inclusive. It was also the UN Secretary-General who said that AI cannot be a discussion among the few, the front runners like the US and China.

They should not be the only ones to set the rules; it has to be a truly inclusive discussion about AI. Up until now, more than 100 countries were not part of this discussion because they were not members of the European Union, the Council of Europe, the G7, or the G20. But they are the ones that will use AI, that will adopt AI, and they will also feel the bad results if AI is not doing what it is supposed to do. So it's good that they have a voice at the table, and that all UN member states will come together in July and talk about AI on the scientific basis that the panel has provided.

So that is something that the Global Digital Compact is doing. And, of course, we can talk about geopolitics all the time, but I think that's a way forward. And I'll stop here.

Gunda Ehmke

Thank you. Thank you. And, Shyam, let me turn to you now.

Shyam Krishnakumar

And the fact that there is not a zero-sum game in a lot of this: I think the idea that we can work together to bring a larger voice, beyond the worries of the two or three countries that are able to compete at the top, is something they shared. And I think the role of middle powers in bringing a more inclusive conversation is really important, and Indo-German cooperation is an opportunity for that, including, for example, the industrial AI that the counsellor mentioned, or other opportunities where we can practically create tools, like what Raphael is also talking about, tools that are beneficial and maybe open source.

Why should open source models only come from strategically challenging sources? There could be Indo-German open source models, smaller models, not frontier models, that could be beneficial.

Raphael Leuner

Yeah, I can maybe react to that directly, because I think it's a super important point. Some people believe, or would make us believe, that the AI race is already over, or will only be decided between the US and China. I don't believe that. I think we're more at the start of what's going to come, and I think we can feel this at the summit. And Gunda, you asked the question, what comes next? I think next comes building and implementing AI in all these fields that we have. We see so many ideas around here, and first steps toward that, but we don't yet see widespread AI adoption in every field, in every part of life.

I do think this is going to happen over the next five years, and I don't believe for a second that it is only going to be done by the US or China. And when it comes to middle powers, Germany, India, I think we are going to see much closer collaboration in smaller groups that don't try to build dependence, making you dependent on us or us dependent on you, but rather ensure that every country brings to the table what it is particularly good at, so that the results improve the application of AI for everybody involved. I do think there is a strategy for that.

And I think the way forward you asked about is to start with it and to build AI together. I think that is a great rallying cry.

Gunda Ehmke

Yeah, yeah, please.

Shyam Krishnakumar

Raphael, you led me onto a very interesting trail, so I had to intervene. I think one of the interesting moments, if you think about technology again, was the open source revolution of the 1990s, right? You saw operating systems, considered the frontier technology of that time, being built by volunteers at a fraction of the cost. It diffused the race in a certain way, or diffused the dominance, but it also enabled accessibility across the world. So even coming together as middle powers, the power of open source in democratizing AI and reducing the factor costs of access to it becomes very powerful if you draw from it. And now you have led me onto a trail as well.

I just want to kind of contextualize the sovereignty thing as well.

Shahani Yaktiyami

And I do think that when we talk about artificial intelligence, it's not just the one application that we see when we use our phones or interact with a particular model, right? It's an entire stack. And the question of sovereignty, or the concerns in the sovereignty debate, is also born out of geopolitics. We don't want to be a country in which suddenly one day we wake up and our technology is not available to us because of something that happened in another corner of the world. So the sovereignty debate is coming out of geopolitics as well. That being said, we don't need to be beholden to it. I fully agree.

And I really like the point about understanding what our strengths are. Germany had a high-tech strategy that came out last year, with an emphasis on Germany being a data hub; India is trying to do that as well. One of the things China is really good at is industrial data, because they have been collecting that data for a very long time, having automated quicker than many of us. And that is something where we can collectively build competitiveness.

So I do think we need to reset some of the inequalities in the AI stack. And as much as I understand where the sovereignty framing comes from, I don't always think it is the best language for where we are. I think we need more sophisticated and nuanced ways of talking about a managed interdependence, where I hold a certain value in the AI stack, that is my strength, and the likelihood of you weaponizing it is very limited; that is why I have leverage. And I do think leveraging a country's strength at a specific layer of the AI stack is a prominent and powerful middle-power strategy.

Gunda Ehmke

Thank you. These are all beautiful closing remarks, but I would like to open the floor to the audience. Are there any questions? Yes, please.

Audience

Hi, I'm Sreeni. I'm a student at Ashoka University. I have a question for everyone on the panel; feel free to answer. The question is: what are some parts of foreign policy research, decision making, and implementation that can be automated by AI, or that will see significant use of AI in your day-to-day tasks?

Norman Schulz

Maybe I can quickly answer this question. Well, I certainly don't think that AI will make any decisions anytime soon. There is always going to be a human making the decision, and it's not going to be me or my boss; it's going to be a collaborative decision by the government and the legislature and all of that. But our job will also not go away. We will use AI to make our job easier, to consume data, or I would rather say information, more easily and quickly, which in turn will free diplomats for the other part of the work: connecting, connecting the dots, thinking up innovative ways of cooperation. It's basically drinking coffee and shaking hands.

It's traveling to India and learning a lot about the situation here. So AI will free us from the tedious task of skimming through these very valuable documents written not only by NGOs, but also by governments, and will make our lives easier, but our work will not go away. Thank you.

Gunda Ehmke

Okay, one more question. The lady maybe in the back.

Audience

Hi, I'm Sanjeevni, and I work in radio journalism in the UK. My question was for you, Norman. Actually, Norman and Shahani, both of you. During my Masters in Journalism, I was studying how journalists were framing the Russia-Ukraine war, and we were observing how the narratives changed across different outlets. Now that AI is coming into play, do you think AI will help change narratives for the better? And I'm not speaking from a journalism point of view; in general, geopolitically, do you think the narratives will be framed in a way that's unbiased? Or how do you think it will help with that?

Shahani Yaktiyami

I'll take a stab. I'd actually be very curious to pose a question back to you, because one of the things I've been really intrigued by in my line of work is how AI is being deployed in media and newsrooms, and I'd be very interested to chat afterwards to learn about your methodology. But to your question on AI shaping narratives, I think it's a great question. I would not let AI shape narratives. I would hope we shape narratives as human beings, depending on what we think and feel and analyze through empirical evidence about the world. And I would be very worried about a world in which we allow AI the space to shape narratives, especially on geopolitics, because then the narrative would depend on what the particular AI model doing the shaping has been trained on.

That being said, I can also see how AI can do harm by amplifying incorrect or geopolitically challenging narratives. And that's where we know that AI cannot replace society. So I do think that a world in which we allow AI to shape narratives is not a world we want to live in; but if it is a world in which that can happen, we need to find the right mitigation strategies. One thing that I know India is doing, Shyam, we talked about it earlier, is these bias detection technologies, which are critical. AI is a technology for which, on the one hand, we need strong regulation to prevent the harms it can do, but at the same time we also need technological tools to deal with some of those harms, and to push democratic innovation in AI for exactly these harms.

And I’ll stop there.

Norman Schulz

Just one sentence. If you let AI write the newspapers, they become incredibly dull, because it's going to be repetitive all the time. But I agree with your point about bias. This is something that we all have to challenge and to face, and AI is helping us; that's a good thing. AI is not only a risk; it's also an opportunity, helping us detect bias and then counter it. Thank you.

Raphael Leuner

Just one sentence I want to add, because I found your point so important. I don't see any risk that AI is going to shape narratives by itself somehow, but of course it's an incredible tool for actors trying to shape narratives. And we have seen this on so many fronts already. We have colleagues in the Foreign Office who are monitoring this and seeing that it's used, for example, to amplify certain messages across social media, and increasingly now across faked websites that, with the help of AI, you can pull up in seconds, so suddenly you don't have one or two of them, but thousands. AI is already used quite heavily as a tool by certain actors who are trying to influence geopolitical discussions in exactly that way.

Gunda Ehmke

Sorry, we are running out of time, but I'm sure the speakers will stay here a little bit. So thank you for listening to our panel discussion. Maybe a big applause. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (34)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Gunda Ehmke introduced Raphael Leuner (data scientist, German Federal Foreign Office), Dr Shahani Yaktiyami (senior officer, German Marshall Fund), and Norman Schulz (consular officer for AI and digital technologies, German Foreign Office).”

The participant list and their roles are confirmed by the knowledge base entries that name Raphael Leuner, Dr. Shahani Yaktiyami and Norman Schulz with the same titles [S1] and [S8].

Additional Context (medium)

“The lab’s work includes building AI tools for negotiations, notably analysing large document collections.”

The knowledge base describes AI applications in negotiations such as data analysis, scenario modelling and document analysis, providing context for the lab’s focus [S100].

Additional Context (low)

“The team chose open‑source technologies and to reuse existing state‑government applications to keep development agile and cost‑effective.”

European policy discussions highlight a preference for open-source solutions in government systems and view open source as a competitive tool, adding nuance to the lab’s strategy [S109] and [S104].

Additional Context (low)

“Leuner noted the growing worldwide adoption of Chinese open‑source models and expressed enthusiasm for Indian large‑language‑model alternatives.”

The knowledge base mentions China’s use of open-source AI models and Europe’s interest in leveraging open-source technology, providing background for the observation about Chinese models, though it does not specifically reference Indian LLMs [S109].

External Sources (115)
S1
AI Algorithms and the Future of Global Diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S2
AI Algorithms and the Future of Global Diplomacy — -Gunda Ehmke: Moderator/Host of the discussion
S4
AI Algorithms and the Future of Global Diplomacy — – Shahani Yaktiyami- Norman Schulz
S5
AI Algorithms and the Future of Global Diplomacy — -Shyam Krishnakumar: Works at an institute (appears to be associated with Pranav Institute based on context), focuses on…
S7
AI Algorithms and the Future of Global Diplomacy — Hi, I’m Sreeni. I’m a student at Ashoka University. I have a question for everyone in the panel. Feel free to. answer. T…
S8
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S9
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S10
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S11
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S12
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S13
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — But still, you know, the amount of manpower you need for developing AGI kind of systems. And it is yet to see just a mat…
S14
Cybermediation: What role for blockchain and artificial intelligence? — After explaining in further detail some aspects of NLP, she suggested that these tools can be used to support the work o…
S15
How AI Is Transforming Diplomacy and Conflict Management — And Charlie, you talked a little bit before about, you know, there’s an obvious role that LLS has. I think that’s a real…
S16
The strategic imperative of open source AI — This AI shift has been counterintuitive. Chinese companies historically favoured proprietary software, and Republicans w…
S17
China’s AI industry is transforming with open-source models, challenging the OpenAI proprietary approach — China’s AI landscape iswitnessinga profound transformation as it embraces open-source large language models (LLMs), larg…
S18
UNGA Resolution on enhancing international cooperation on AI | ‘China’ AI Resolution — Calls upon other international, regional and subregional organizations and international financial institutions and all …
S19
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Great. Thank you. Bilel Jamoussi:Great. Thank you. Google has a history of both open source contribution…
S20
https://dig.watch/event/india-ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — See, I think the core aspects of regulation, as sir said, I generally don’t go into technology or which technology to us…
S21
Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue — – **Frances** – From YouthDIG, the European Youth IGF Chine Labbé: Hi, thank you very much for having me. So I’ll start…
S22
Islamic State exploits AI to enhance propaganda — Islamic State supporters increasingly use AI tobolstertheir online presence and create more sophisticated propaganda. A …
S23
Global AI adoption reaches record levels in 2025 — Global adoption of generative AIcontinued to risein the second half of 2025, reaching 16.3 percent of the world’s popula…
S24
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S25
Crisis management — This collaboration also helps mitigate the limitations of both approaches. Human oversight ensures accountability, corre…
S26
Main Session on Artificial Intelligence | IGF 2023 — Seth Center:IAEA is an imperfect analogy for the current technology and the situation we faced for multiple reasons. One…
S27
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And how do we demonstrate that the risks have been managed well? And that is where the assurance ecosystem that Rebecca …
S28
Networking Session #26 Transforming Diplomacy for a Shared Tomorrow — – **Claire Patzig**: Junior data associate at the Data Innovation Lab by the Federal Republic of Germany from the FFO (F…
S29
The role of diplomacy in AI geopolitics | AGDA — He also advised diplomatic services to start AI transformation through small projects such as the automation of administ…
S30
Artificial intelligence and diplomacy: A new tool for diplomats? — Artificial intelligence (AI) is transitioning from science fiction into our everyday lives. Over the past few years, the…
S31
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S32
What is it about AI that we need to regulate? — Speakers from developing nations advocated for stronger transparency requirements. In thelocal AI policy session, it was…
S33
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S34
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S35
IndoGerman AI Collaboration Driving Economic Development and Soc — “Productivity and resilience.”[4]. “As Anandi said, we already have an MOU with Fraunhofer, which we are working togethe…
S36
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S37
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Investigations are currently underway to assess the hazards posed by AI-powered language models that generate human-like…
S38
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Abeer Alsumait: assistive technologies, but there are challenges like a very minor issue might also be a kind of we ca…
S39
The open-source gambit: How America plans to outpace AI rivals by democratising tech — The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident i…
S40
The strategic imperative of open source AI — Meta’s Chief AI Scientist, Yann LeCun, captured this shift clearly. Responding to those who see DeepSeek’s rise as ‘Chin…
S41
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S42
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S43
AI Algorithms and the Future of Global Diplomacy — And today it is artificial intelligence. So the technology is not new. Yes. But the tactics aren’t. And today we are her…
S44
What does a former coffee-maker-turned-AI say about AI policy on the verge of the 2020s? — As we approach a new decade of policy discussions, we could say that this quote presents common thoughts, without provid…
S45
Empowering Workers in the Age of AI — Current AI models suffer from significant bias because they are trained primarily on data from developed countries and h…
S46
Laying the foundations for AI governance — Legal and regulatory | Cybersecurity Need for international cooperation despite geopolitical challenges
S47
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S48
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S49
[WebDebate #22 summary] Algorithmic diplomacy: Better geopolitical analysis? Concerns about human rights? — Riordan started by acknowledging there are great potential applications for Big Data analysis both in content policy and…
S50
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S51
EU Artificial Intelligence Act — (44) High quality data and access to high quality data plays a vital role in providing structure and in ensuring the per…
S52
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S53
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S54
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mit…
S55
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S56
Judiciary engagement — Legal and regulatory | Development Need for global cooperation but sovereignty starts with informed national decision-m…
S57
The US push for AI dominance through openness — In a bold move to maintain its edge in the global AI race—especially against China—the United States has unveiled a swee…
S58
Keynote interview with Geoffrey Hinton (remote) and Nicholas Thompson (in-person) — Hinton highlighted the potential of AI to democratize education by serving as a personal tutor for every learner, thus b…
S59
HIGH LEVEL LEADERS SESSION I — Politicians may not always be aware of new technology
S60
New Technologies and the Impact on Human Rights — Current technology foresight exercises are designed primarily to enable ease of business with minimal negative externali…
S61
Defence against the DarkWeb Arts: Youth Perspective | IGF 2023 WS #72 — A lack of understanding of technology can lead to poor policy outcomes and enforcement outcomes It is believed that tru…
S62
AI Algorithms and the Future of Global Diplomacy — I think India is at a one. wonderful place because you are a digital powerhouse and you have all the structures and all …
S63
Networking Session #26 Transforming Diplomacy for a Shared Tomorrow — – **Claire Patzig**: Junior data associate at the Data Innovation Lab by the Federal Republic of Germany from the FFO (F…
S64
[WebDebate #19 summary] What is the potential of big data for diplomacy? — Höne pointed out that capacity building is a big part of the report. However, awareness-building needs to happen first, …
S65
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — Yeah, thanks so much. Yeah, maybe to get to take a step back and answer the question, how like someone like me as a as a…
S66
AI diplomacy — For centuries, power was defined by territory, armies, and economic might. Today, a new element is paramount: data and t…
S67
[WebDebate #22 summary] Algorithmic diplomacy: Better geopolitical analysis? Concerns about human rights? — Riordan remarked that while the culture of the technology sector is difficult to understand for diplomats and government…
S68
The role of diplomacy in AI geopolitics | AGDA — He also advised diplomatic services to start AI transformation through small projects such as the automation of administ…
S69
How AI Is Transforming Diplomacy and Conflict Management — I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our …
S70
Open Forum #30 High Level Review of AI Governance Including the Discussion — Need for inclusive international cooperation and avoiding fragmentation
S71
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — ## Governance Frameworks and International Cooperation Models
S72
How to make AI governance fit for purpose? — AI governance must address various risks brought by AI technology, including data leakage, model hallucinations, AI acti…
S73
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S74
Indo-German AI Collaboration Driving Economic Development and Soc — Building confidence and security in the use of ICTs | Data governance | Artificial intelligence. India’s demographic div…
S75
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — We’re expanding compute infrastructure, securing energy supply and access. We work on foundational models and AI solutio…
S76
Hard power of AI — Another noteworthy observation is the prevalence of Russian propaganda on the internet, particularly concerning the conf…
S77
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S78
What is it about AI that we need to regulate? — Long-term Implications of Private Digital Platforms in Shaping Public Discourse and Democratic Processes. The Internet Gov…
S79
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — However, it is important to note that there is a potential risk associated with the use of such systems, as they may pro…
S80
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — Investigations are currently underway to assess the hazards posed by AI-powered language models that generate human-like…
S81
Sticking with Start-ups / DAVOS 2025 — The overall tone was informative and optimistic. Panelists spoke candidly about challenges in the startup world but main…
S82
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S83
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists expressed excitement about AI’s capabilities and potentia…
S84
AI Development Beyond Scaling: Panel Discussion Report — The tone began as optimistic and technically focused, with researchers enthusiastically presenting their innovative appr…
S85
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S86
What role can AI play in diplomatic negotiation? (WebDebate #57) — 14:00 UTC (09:00 EDT | 15:00 CET | 19:30 IST) Given the stakes of this task, any tool that might support the process de…
S87
Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms — Geopolitical disruption may be permanent—there seems to be little expectation of a return to a pre-existing order. The o…
S88
Network Session: Digital Sovereignty and Global Cooperation | IGF 2023 Networking Session #170 — In conclusion, the tension between cooperation and sovereignty in the digital cooperation landscape is a complex and mul…
S89
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — And as a small country and as a small island developing state, as echoed by Mauritius, we do not have the same access to…
S90
Global Digital Compact in focus at one of the main sessions on IGF 2023 — The Global Digital Compact (GDC) took centre stage at the Internet Governance Forum (IGF) 2023, where key stakeholders co…
S91
Transforming Agriculture: AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S92
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S93
Impact & the Role of AI: How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S94
Driving India’s AI Future: Growth, Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S95
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S96
Open Forum #48 The International Counter Ransomware Initiative — – Niles Steinhoff: Cyber Foreign Policy and Cybersecurity Coordination Division at the German Federal Foreign Office Je…
S97
How to prepare diplomats for the AI era? — AI between one day, month, and year: The good news is that you can start ‘your AI’ fast by, for example, using the profe…
S98
When AI use turns dangerous for diplomats — Diplomats are increasingly turning to tools like ChatGPT and DeepSeek to speed up drafting, translating, and summarising…
S99
Collaborative AI Network – Strengthening Skills Research and Innovation — about those. So obviously, it’s not just creating applications. It’s the same old story of digital transformation, right…
S100
Negotiations — Artificial Intelligence (AI) has various applications in diplomacy. It can be used for data analysis to predict the outco…
S101
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Mario Nobile: Buongiorno, thank you and good morning to all. Our Italian strategy rests on four pillars. Education, firs…
S102
NRI Collaborative Session: Data Governance for the Public Good Through Local Solutions to Global Challenges — Nancy Kanasa: Good morning, everyone. I’m Nancy Kanasa from the Pacific, KGF. I work with the government of Papua New G…
S103
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Mark Irura:To add on to what’s been shared already, the supply and the demand side were mentioned. And on the supply sid…
S104
Gathering and Sharing Session: Digital ID and Human Rights C | IGF 2023 Networking Session #166 — Audience:Just to add a bit to what the Secretary-General just said. The best thing a government can do is to develop the…
S105
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Sarim Aziz: At the risk of contradicting Matisse, but just to say yes, I mean, that’s one option. But I think the ans…
S106
Software.gov — In conclusion, Doreen Bogdan-Martin emphasizes the importance of GovStack as an efficient and reusable tool for implemen…
S107
Z.ai unveils cheaper, advanced AI model GLM-4.5 — Chinese AI startup Z.ai, formerly Zhipu, is increasing pressure on global competitors with its latest model, GLM-4.5. The…
S108
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S109
Global Perspectives on Openness and Trust in AI — “which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leve…
S110
Keynotes — Historical Context of Technological Revolutions
S111
Satellite diplomacy — Satellite diplomacy cuts across all three aspects of diplomacy. Satellites are part of reshaping geopolitical environmen…
S112
AI for Democracy: Reimagining Governance in the Age of Intelligence — I believe this had been the most important event. We are more or less actually reaching to the… culmination of this hi…
S113
Military AI: Operational dangers and the regulatory void — While international forums are yet to find consensus on key issues, many states are straying further from regulation to …
S114
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as an AI race with a single winner. Officials argue A…
S115
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Raphael Leuner
8 arguments · 157 words per minute · 1172 words · 446 seconds
Argument 1
Fast co‑creation via data labs enables rapid AI solutions (Raphael)
EXPLANATION
Raphael explains that the establishment of data labs across German ministries created short, direct channels to colleagues, allowing the Foreign Office to develop AI tools quickly. This fast co‑creation bypasses the slow, costly traditional IT projects.
EVIDENCE
He notes that the German government launched data labs in 2021, resulting in 16 labs across ministries by 2022, and that he joined the one in the Foreign Office [12-14]. He stresses that being inside the ministry gives very short contact paths to colleagues, which is crucial for the fast-moving AI field and enables rapid co-creation of solutions [18-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The establishment of data labs across German ministries in 2021 and their role in rapid AI prototyping are described in the AI Algorithms and the Future of Global Diplomacy report [S1].
MAJOR DISCUSSION POINT
Implementation of AI within the German Foreign Office
DISAGREED WITH
Norman Schulz, Shahani Yaktiyami
Argument 2
AI supports negotiation preparation by processing large document sets (Raphael)
EXPLANATION
Raphael describes AI tools that help diplomats analyse massive amounts of documents to understand other countries’ positions and the impact of NGOs, academia and corporations. This supports more informed negotiation preparation.
EVIDENCE
He mentions a specific application that assists negotiations by digging through huge piles of documents to extract relevant positions and impacts, highlighting AI’s role in this process [130-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s capacity to synthesize massive document collections for negotiations is highlighted in How AI Is Transforming Diplomacy and Conflict Management [S15] and in the Cybermediation discussion of AI-assisted preparation [S14].
MAJOR DISCUSSION POINT
Implementation of AI within the German Foreign Office
AGREED WITH
Norman Schulz
Argument 3
Prioritising open‑source AI reduces dependence on external vendors, but Chinese open‑source models pose strategic concerns (Raphael)
EXPLANATION
Raphael states that the Foreign Office deliberately uses open‑source AI technologies and scaffolding to avoid vendor lock‑in, yet observes that many leading open‑source models currently come from China, raising strategic worries for Europe and India.
EVIDENCE
He explains that the office focuses on open-source models and tooling, reusing state applications, and notes the recent surge of Chinese open-source AI models being adopted worldwide, which could have consequences for Germany and Europe [128-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The strategic shift toward open-source AI and the emerging dominance of Chinese open-source models are analysed in The strategic imperative of open source AI [S16] and China’s AI industry transformation with open-source models [S17].
MAJOR DISCUSSION POINT
Governance, regulation, and international cooperation on AI
AGREED WITH
Norman Schulz
DISAGREED WITH
Norman Schulz, Shahani Yaktiyami
Argument 4
Joint development of non‑frontier AI projects can strengthen both countries’ positions (Raphael)
EXPLANATION
Raphael argues that middle powers like Germany and India can collaborate on AI projects that are not at the frontier, focusing on shared strengths to create practical applications without creating dependency on the US or China.
EVIDENCE
He emphasizes that AI adoption will expand over the next five years, that middle powers will collaborate in smaller groups, and that building AI together on non-frontier models will benefit all participants [202-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration on non-frontier AI models among middle powers is advocated in the AI Algorithms and the Future of Global Diplomacy transcript [S1] and the open-source vs. proprietary debate [S19].
MAJOR DISCUSSION POINT
Indo‑German cooperation and sectoral AI applications
AGREED WITH
Shyam Krishnakumar, Gunda Ehmke, Shahani Yaktiyami
Argument 5
The German Foreign Office adopts open‑source technologies and reuses existing state applications for AI projects (Raphael)
EXPLANATION
Raphael outlines the office’s strategy of leveraging open‑source AI models and reusing existing applications from German states to accelerate AI deployment while minimizing development costs.
EVIDENCE
He cites the reuse of a general chat and knowledge-based application from a state government and the broader focus on open-source technologies for AI projects [128-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Foreign Office’s reuse of state-level applications and its open-source AI stack are documented in the AI Algorithms and the Future of Global Diplomacy report [S1] and reinforced by the strategic imperative of open-source AI [S16].
MAJOR DISCUSSION POINT
Open‑source AI strategy and strategic considerations
Argument 6
There is a need to develop Indian open‑source alternatives to counter the dominance of Chinese models (Raphael)
EXPLANATION
Raphael calls for the creation of Indian open‑source large language models to provide alternatives to the currently dominant Chinese models, thereby diversifying the AI ecosystem.
EVIDENCE
He mentions his excitement about emerging Indian AI models and suggests they could serve as alternatives to Chinese offerings, reducing strategic dependence [135-137].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for Indian open-source large language models as alternatives to Chinese offerings are echoed in analyses of open-source strategy and Chinese model risks [S16] and [S17].
MAJOR DISCUSSION POINT
Open‑source AI strategy and strategic considerations
Argument 7
AI is already being used as a tool for propaganda; continuous monitoring is required (Raphael)
EXPLANATION
Raphael warns that AI is being exploited to amplify political messages and generate large numbers of fake websites, necessitating ongoing monitoring by the Foreign Office.
EVIDENCE
He describes how AI is used to amplify certain messages across social media and to quickly create thousands of fake websites, a practice already observed by colleagues in the Foreign Office [295-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of AI for large-scale disinformation and propaganda, and the need for ongoing monitoring, are discussed in the AI and Disinformation forum [S21] and in reports on extremist exploitation of AI [S22].
MAJOR DISCUSSION POINT
AI’s impact on diplomatic work and narrative formation
AGREED WITH
Norman Schulz, Shahani Yaktiyami
DISAGREED WITH
Shahani Yaktiyami, Norman Schulz
Argument 8
AI adoption in the Foreign Office is expected to expand significantly over the next five years.
EXPLANATION
Raphael predicts that AI will become widely used across many functions of the Foreign Office within a medium‑term horizon, moving beyond pilot projects to broader deployment.
EVIDENCE
He states that while AI is not yet widespread in every field, he expects widespread AI adoption in the next five years [211-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Projected rapid expansion of AI within foreign ministries aligns with global adoption trends reported in Global AI adoption reaches record levels [S23] and the forward-looking outlook in the AI Algorithms report [S1].
MAJOR DISCUSSION POINT
Future trajectory of AI implementation in diplomacy
Norman Schulz
7 arguments · 127 words per minute · 1437 words · 675 seconds
Argument 1
AI frees diplomats from tedious data‑consumption tasks, but decisions remain human (Norman)
EXPLANATION
Norman explains that AI can automate the processing of large volumes of information, allowing diplomats to focus on higher‑level analysis and relationship‑building, while final decisions will still be made by humans.
EVIDENCE
He states that AI will make consuming information easier and quicker, freeing diplomats for connecting the dots and innovative cooperation, but emphasizes that decision-making will remain a collaborative human process [257-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in automating data consumption while keeping decision-making human is described in How AI Is Transforming Diplomacy and Conflict Management [S15] and reinforced by AI diplomacy commentary emphasizing human mastery [S24].
MAJOR DISCUSSION POINT
Implementation of AI within the German Foreign Office
AGREED WITH
Raphael Leuner
Argument 2
International cooperation is needed to mitigate AI risks, analogous to nuclear‑era agreements (Norman)
EXPLANATION
Norman draws parallels between the nuclear arms race and the current AI race, arguing that global cooperation is essential to manage AI risks and prevent harmful competition.
EVIDENCE
He references historical nuclear tensions, including the Cuban missile crisis, and suggests that the US and China will eventually need to cooperate on AI regulations, highlighting the need for international risk mitigation [66-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy between AI risk mitigation and nuclear arms control is drawn in the IGF 2023 session on AI governance [S26] and in the UNGA Resolution on enhancing international cooperation on AI [S18].
MAJOR DISCUSSION POINT
Geopolitical implications of AI and the role of middle powers
AGREED WITH
Shahani Yaktiyami
DISAGREED WITH
Raphael Leuner, Shahani Yaktiyami
Argument 3
Germany leads the Global Digital Compact and the UN AI scientific panel to ensure inclusive governance (Norman)
EXPLANATION
Norman outlines Germany’s leadership in negotiating the Global Digital Compact and its participation in the UN‑mandated Independent Scientific International Panel on AI, which aims to provide a scientific basis for global AI governance.
EVIDENCE
He details Germany’s role in the Compact, the AI panel’s composition, its upcoming report, and the July AI dialogue in Geneva that will feed into inclusive AI governance discussions [166-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Germany’s leadership role in the Global Digital Compact and the UN-mandated Independent Scientific International Panel on AI is noted in the UNGA resolution on AI cooperation [S18].
MAJOR DISCUSSION POINT
Governance, regulation, and international cooperation on AI
AGREED WITH
Shahani Yaktiyami
Argument 4
AI automates information processing, freeing diplomats for higher‑level work, but does not replace decision‑making (Norman)
EXPLANATION
Norman reiterates that AI will handle routine data‑processing, enabling diplomats to concentrate on strategic thinking and relationship‑building, while decision authority stays with human officials.
EVIDENCE
He repeats that AI will free diplomats from tedious document review, allowing them to focus on connecting the dots and innovative cooperation, but stresses that decisions will still be made collaboratively by government and legislature [257-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Automation of information processing while preserving human decision authority is highlighted in How AI Is Transforming Diplomacy [S15] and AI diplomacy commentary [S24].
MAJOR DISCUSSION POINT
AI’s impact on diplomatic work and narrative formation
Argument 5
AI can be employed to detect and mitigate bias in media and diplomatic narratives.
EXPLANATION
Norman highlights AI’s capacity to identify biased content and help correct it, turning the technology into a safeguard rather than a source of bias.
EVIDENCE
He notes that AI helps detect bias and then counter it, describing it as both a risk and an opportunity for bias mitigation [288-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven bias detection and mitigation tools for media and diplomatic content are discussed in the AI and Disinformation forum [S21] and in crisis-management literature on bias mitigation [S25].
MAJOR DISCUSSION POINT
AI as a tool for bias detection
Argument 6
AI should not replace human creativity in diplomatic reporting; human innovative thinking remains essential.
EXPLANATION
Norman warns that relying solely on AI for drafting diplomatic reports would erase the unique analytical contributions of diplomats, emphasizing the need for human insight.
EVIDENCE
He argues that using AI to write diplomatic reports would make them uniform and that innovative thinking must come from humans, not AI [159-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of human creativity in diplomatic reporting is emphasized in AI diplomacy commentary that stresses human mastery of the tool [S24] and in crisis-management oversight discussions [S25].
MAJOR DISCUSSION POINT
Limits of AI in diplomatic work
Argument 7
Developing AI domestically ensures alignment with national values and avoids foreign model biases.
EXPLANATION
Norman suggests that building AI systems in‑house is the best way to guarantee they reflect a country’s own values and are not influenced by external ideological biases.
EVIDENCE
He states that the best way to align systems with values is to develop them ourselves rather than procure from outside, and cites concerns about Chinese model biases [157-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Domestic AI development to safeguard national values and avoid foreign model bias is supported by analyses of the strategic imperative of open-source AI [S16] and concerns about Chinese model bias [S17].
MAJOR DISCUSSION POINT
Domestic AI development for value alignment
Shahani Yaktiyami
6 arguments · 162 words per minute · 1665 words · 615 seconds
Argument 1
AI continues the historic pattern of technology shaping diplomacy; tactics are not new (Shahani)
EXPLANATION
Shahani observes that technology has always influenced foreign policy, citing past revolutions such as the Industrial, Nuclear, and Space eras, and positions AI as the latest iteration of this long‑standing dynamic.
EVIDENCE
She references the historical impact of the Industrial, nuclear, and space revolutions on diplomacy, stating that today artificial intelligence continues this pattern [36-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The view that AI is the latest in a series of technological revolutions affecting diplomacy is reflected in the AI Algorithms and the Future of Global Diplomacy overview of past revolutions [S1] and AI diplomacy commentary [S24].
MAJOR DISCUSSION POINT
Geopolitical implications of AI and the role of middle powers
Argument 2
Middle powers can leverage AI through regulation (Germany) and application‑focused strategies (India) (Shahani)
EXPLANATION
Shahani argues that Germany can use its regulatory expertise while India can focus on AI applications, allowing both middle powers to exert influence on the AI value chain despite not leading in frontier model development.
EVIDENCE
She explains that Germany traditionally leverages rules and regulation, whereas India emphasizes application-driven AI, illustrating how each middle power can claim a strategic position on the AI stack [48-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The complementary roles of Germany’s regulatory expertise and India’s application-driven AI approach are discussed in the UNGA AI cooperation resolution [S18] and the strategic imperative of open-source AI [S16].
MAJOR DISCUSSION POINT
Geopolitical implications of AI and the role of middle powers
AGREED WITH
Raphael Leuner, Shyam Krishnakumar, Gunda Ehmke
DISAGREED WITH
Norman Schulz, Raphael Leuner
Argument 3
AI deployment must factor security and sovereignty risks specific to each country (Shahani)
EXPLANATION
Shahani stresses that AI systems must be evaluated for national security implications, citing Germany’s concerns about Ukraine and India’s border disputes, and notes that companies now employ geopolitical risk advisors.
EVIDENCE
She mentions Germany’s concern with Ukraine, India’s border security challenges, and the emergence of geopolitical risk advisors in companies, underscoring the need to embed security considerations into AI deployment [141-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Country-specific security and sovereignty considerations for AI deployment are highlighted in the UNGA resolution on AI capacity-building [S18] and in risk-aware AI governance literature [S25].
MAJOR DISCUSSION POINT
Governance, regulation, and international cooperation on AI
AGREED WITH
Norman Schulz
Argument 4
AI should not autonomously shape geopolitical narratives; human oversight is essential (Shahani)
EXPLANATION
Shahani argues that narratives should remain under human control, warning that AI‑generated narratives could reflect the biases of the underlying models and potentially amplify misinformation.
EVIDENCE
She states that AI should not be allowed to shape narratives, emphasizing the need for human judgment and expressing concern about AI-driven amplification of incorrect or geopolitically charged narratives [277-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for human oversight over AI-generated geopolitical narratives is advocated in AI diplomacy commentary emphasizing human mastery [S24] and bias-mitigation discussions [S25].
MAJOR DISCUSSION POINT
AI’s impact on diplomatic work and narrative formation
AGREED WITH
Raphael Leuner, Norman Schulz
DISAGREED WITH
Norman Schulz, Raphael Leuner
Argument 5
AI can assist in detecting and mitigating bias in media narratives (Shahani)
EXPLANATION
Shahani highlights that AI tools can be employed to identify and counteract bias in news and social media, suggesting that such technologies are part of India’s efforts to ensure fair information flows.
EVIDENCE
She points to India’s development of bias-detection technologies and calls for strong regulation combined with technical tools to mitigate harms while fostering democratic innovation [284-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI tools for bias detection in media and social platforms are covered in the AI and Disinformation forum [S21] and in crisis-management bias mitigation studies [S25].
MAJOR DISCUSSION POINT
AI’s impact on diplomatic work and narrative formation
Argument 6
AI serves as a tool for strategic communication and technology diplomacy, enabling countries to project influence.
EXPLANATION
Shahani describes AI not only as a technology but also as a means for states to conduct strategic communication and advance their diplomatic objectives in the global arena.
EVIDENCE
She notes that AI is shaping how countries use technology for strategic communication, and that it functions as a tool of technology diplomacy [35-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role as a strategic communication instrument in technology diplomacy is examined in How AI Is Transforming Diplomacy and Conflict Management [S15] and in the AI and Disinformation discussion [S21].
MAJOR DISCUSSION POINT
AI as an instrument of technology diplomacy
Shyam Krishnakumar
4 arguments · 198 words per minute · 648 words · 195 seconds
Argument 1
India excels in context‑specific models and has a large, skilled AI workforce, though it does not yet build frontier models (Shyam)
EXPLANATION
Shyam notes that while India is not yet creating large frontier language models, it produces context‑relevant AI solutions, benefits from a sizable talent pool, and can develop applications at lower cost.
EVIDENCE
He states that India is not lagging but lacks frontier model capability; however, it shows strong innovation with 14 models released over 14 days, a large skilled workforce, and cost-effective inference capabilities [92-100].
MAJOR DISCUSSION POINT
Indo‑German cooperation and sectoral AI applications
Argument 2
Cooperation opportunities exist in industrial AI and healthcare, combining German data/automation expertise with Indian model‑building capacity (Shyam)
EXPLANATION
Shyam proposes joint projects where Germany contributes industrial data and automation know‑how, while India provides AI model development, especially in sectors like healthcare where large datasets exist.
EVIDENCE
He cites industrial AI as a promising cooperation area, mentions India’s capacity to build models, Germany’s automation expertise, and highlights healthcare data (India performs ten times more surgeries) as a fertile ground for joint AI initiatives [108-119].
MAJOR DISCUSSION POINT
Indo‑German cooperation and sectoral AI applications
AGREED WITH
Raphael Leuner, Gunda Ehmke, Shahani Yaktiyami
Argument 3
Open‑source democratizes AI, lowers entry costs, and enables middle‑power collaboration (Shyam)
EXPLANATION
Shyam reflects on the 1990s open‑source revolution, arguing that open‑source AI reduces barriers to entry and allows middle powers to collaborate without being dominated by large vendors.
EVIDENCE
He references the open-source revolution of the 1990s, noting how volunteer-built operating systems lowered costs and democratized access, and connects this to current AI collaboration among middle powers [219-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The democratizing effect of open-source AI and its suitability for middle-power collaboration are analysed in The strategic imperative of open source AI [S16] and the open-source vs. proprietary debate [S19].
MAJOR DISCUSSION POINT
Open‑source AI strategy and strategic considerations
Argument 4
India’s large, skilled AI workforce creates a strong pipeline for upskilling and capacity development.
EXPLANATION
Shyam points out that the extensive talent pool in India provides a foundation for continuous AI skill development and scaling of AI initiatives.
EVIDENCE
He mentions a large workforce that is talented in technology and notes that an upskilling opportunity certainly exists for AI, indicating a strong pipeline for capacity development [102-103].
MAJOR DISCUSSION POINT
India’s AI talent pipeline
Gunda Ehmke
3 arguments · 111 words per minute · 927 words · 500 seconds
Argument 1
AI has become pervasive in the political landscape, influencing diplomacy and foreign policy.
EXPLANATION
Gunda notes that artificial intelligence is now present everywhere in politics, shaping how diplomacy and foreign policy are conducted.
EVIDENCE
She observes that AI is now more or less present everywhere in the political landscape and asks how it is shaping diplomacy and foreign policy in general [28-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The pervasiveness of AI in politics and its impact on diplomacy is noted in AI diplomacy commentary stressing human mastery of the tool [S24].
MAJOR DISCUSSION POINT
Geopolitical impact of AI on diplomacy
Argument 2
Open‑source AI models should be co‑developed by middle powers like India and Germany rather than relying on models from strategic rivals.
EXPLANATION
Gunda argues that middle powers can create their own open‑source AI solutions, reducing dependence on Chinese or other strategically sensitive sources and fostering inclusive innovation.
EVIDENCE
She questions why open-source models should only come from strategically challenging sources and suggests that Indo-German open-source models could be developed as alternatives [200-201].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The case for co-development of open-source models by middle powers is made in The strategic imperative of open source AI [S16] and the open-source vs. proprietary discussion [S19].
MAJOR DISCUSSION POINT
Indo‑German cooperation on open‑source AI
Argument 3
The AI Impact Summit should move from high‑level statements to concrete cooperation mechanisms.
EXPLANATION
Gunda emphasizes the need to translate summit discussions into tangible actions and partnerships rather than merely producing governance frameworks.
EVIDENCE
She asks how the conversation on impact can become concrete and lead to real cooperation, highlighting the gap between discussion and implementation [138-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for translating high-level AI summit statements into concrete cooperation actions appear in the IGF 2023 session urging practical mechanisms [S26].
MAJOR DISCUSSION POINT
From AI impact discussion to concrete cooperation
Audience
2 arguments · 148 words per minute · 183 words · 73 seconds
Argument 1
AI can automate parts of foreign‑policy research, decision‑making and implementation, easing diplomats’ day‑to‑day tasks.
EXPLANATION
A student asks which components of foreign‑policy work could be handled by AI, implying that automation is feasible and desirable for routine analytical work.
EVIDENCE
The audience member asks what parts of foreign-policy research, decision making and implementation can be automated by AI for day-to-day tasks [246-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Automation of foreign-policy research and decision support by AI is highlighted in How AI Is Transforming Diplomacy and Conflict Management [S15] and reinforced by AI diplomacy commentary on human-AI collaboration [S24].
MAJOR DISCUSSION POINT
Potential automation of foreign‑policy work
Argument 2
AI has the potential to improve media narratives and reduce bias, but safeguards are needed to ensure neutrality.
EXPLANATION
An audience participant questions whether AI can help create more unbiased narratives in journalism and geopolitics, suggesting that AI could be a tool for better framing if properly managed.
EVIDENCE
The audience asks if AI will help change narratives for the better and whether it can produce unbiased geopolitical narratives [264-273].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of AI to improve narratives while requiring safeguards is discussed in the AI and Disinformation forum [S21] and in bias-mitigation studies within crisis-management literature [S25].
MAJOR DISCUSSION POINT
AI’s role in shaping and bias‑checking narratives
Agreements
Agreement Points
AI can automate large‑scale information processing for diplomats, freeing them for higher‑level analysis while final decisions remain human.
Speakers: Raphael Leuner, Norman Schulz
AI supports negotiation preparation by processing large document sets (Raphael). AI frees diplomats from tedious data‑consumption tasks, but decisions remain human (Norman).
Both speakers state that AI helps handle massive document collections – supporting negotiation prep (Raphael) and easing data consumption (Norman) – but emphasise that diplomatic decisions will still be made by humans. [130-133][257-259]
POLICY CONTEXT (KNOWLEDGE BASE)
The potential of algorithmic diplomacy to support foreign-policy analysis has been highlighted, but experts stress that human judgment must remain central to avoid biased or incomplete conclusions [S49][S50].
Open‑source and domestically developed AI are preferred to avoid strategic dependence on foreign (especially Chinese) models.
Speakers: Raphael Leuner, Norman Schulz
Prioritising open‑source AI reduces dependence on external vendors, but Chinese open‑source models pose strategic concerns (Raphael). Developing AI domestically ensures alignment with national values and avoids foreign model biases (Norman).
Both argue for using open-source or in-house AI to maintain strategic autonomy, noting the rise of Chinese open-source models as a risk. [128-136][157-158]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers argue that open-source AI reduces reliance on single foreign providers and mitigates strategic risk, a stance echoed in the U.S. AI Action Plan and calls for diversification of AI sources [S39][S40][S57][S41].
Middle powers such as Germany and India should cooperate on sector‑specific, non‑frontier AI projects and co‑develop open‑source models.
Speakers: Raphael Leuner, Shyam Krishnakumar, Gunda Ehmke, Shahani Yaktiyami
Joint development of non‑frontier AI projects can strengthen both countries’ positions (Raphael). Cooperation opportunities exist in industrial AI and healthcare, combining German data/automation expertise with Indian model‑building capacity (Shyam). Open‑source AI models should be co‑developed by middle powers like India and Germany rather than relying on models from strategic rivals (Gunda). Middle powers can leverage AI through regulation (Germany) and application‑focused strategies (India) (Shahani).
All four speakers see value in Germany-India collaboration on practical AI applications (e.g., industrial AI, healthcare) and in jointly building open-source models, leveraging each country’s comparative strengths while avoiding reliance on frontier-model leaders. [202-213][108-119][200-201][48-50]
POLICY CONTEXT (KNOWLEDGE BASE)
Initiatives like the France-India trusted-AI bridge and South-global inclusive AI norms illustrate how middle powers can jointly develop sector-focused, open-source solutions while preserving strategic autonomy [S53][S52][S41].
AI introduces security and sovereignty risks that require international, inclusive governance and cooperation.
Speakers: Norman Schulz, Shahani Yaktiyami
International cooperation is needed to mitigate AI risks, analogous to nuclear‑era agreements (Norman). Germany leads the Global Digital Compact and the UN AI scientific panel to ensure inclusive governance (Norman). AI deployment must factor in security and sovereignty risks specific to each country (Shahani).
Norman stresses the need for multilateral risk-mitigation frameworks (drawing on nuclear-era lessons) and highlights Germany’s role in the Global Digital Compact and UN AI panel, while Shahani points out country-specific security and sovereignty considerations, together underscoring the necessity of inclusive global governance. [66-76][166-184][141-148]
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on sovereign AI emphasize the need to manage critical control points and call for inclusive, multilateral governance frameworks to address security and sovereignty challenges [S42][S46][S47][S48].
AI can be weaponised to spread biased or false narratives, but it also offers tools for bias detection; human oversight remains essential.
Speakers: Raphael Leuner, Norman Schulz, Shahani Yaktiyami
AI is already being used as a tool for propaganda; continuous monitoring is required (Raphael). AI can be employed to detect and mitigate bias in media and diplomatic narratives (Norman). AI should not autonomously shape geopolitical narratives; human oversight is essential (Shahani).
All three agree that AI poses a double-edged risk: it can amplify disinformation (Raphael) yet also provide bias-detection capabilities (Norman), and therefore must be supervised by humans to prevent harmful narrative shaping (Shahani). [295-298][288-292][277-280]
POLICY CONTEXT (KNOWLEDGE BASE)
Studies show AI models inherit biases from training data, enabling both misinformation and bias-detection capabilities, underscoring the necessity of human oversight and risk-assessment mechanisms [S45][S50][S54][S60].
Similar Viewpoints
Both highlight that AI is now a central, recurring factor in diplomatic practice, continuing a long‑standing pattern where new technologies reshape foreign policy. [28-32][36-40]
Speakers: Gunda Ehmke, Shahani Yaktiyami
AI has become pervasive in the political landscape, influencing diplomacy and foreign policy. AI continues the historic pattern of technology shaping diplomacy; tactics are not new.
Both anticipate a rapid scaling of AI use within diplomatic institutions, with the technology handling routine analysis while humans retain decision authority. [211-212][257-259]
Speakers: Raphael Leuner, Norman Schulz
AI adoption in the Foreign Office is expected to expand significantly over the next five years. AI frees diplomats from tedious data‑consumption tasks, but decisions remain human.
Unexpected Consensus
Both diplomatic and policy‑analysis perspectives converge on the need for an inclusive, multilateral AI governance architecture despite their different institutional roles.
Speakers: Norman Schulz, Shahani Yaktiyami
Germany leads the Global Digital Compact and the UN AI scientific panel to ensure inclusive governance (Norman). AI deployment must factor in security and sovereignty risks specific to each country (Shahani).
It is notable that a senior diplomat (Norman) and a policy analyst (Shahani) both stress the importance of inclusive, global governance mechanisms (UN AI panel, Global Digital Compact) to manage AI risks, indicating cross-sectoral alignment on multilateral solutions. [166-184][141-148]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy analyses call for a structured, inclusive global AI governance architecture that balances diplomatic and regulatory insights within multilateral institutions [S46][S47][S48][S55].
Overall Assessment

The panel shows strong convergence on four main themes: (1) AI as a supportive tool for processing diplomatic information while preserving human decision‑making; (2) a shared preference for open‑source or domestically built AI to safeguard strategic autonomy; (3) consensus that Germany and India, as middle powers, should co‑develop sector‑specific, non‑frontier AI applications and open‑source models; (4) agreement that AI’s security, sovereignty and bias risks demand inclusive, multilateral governance frameworks. These points cut across artificial intelligence, capacity development, the enabling environment for digital development, and governance/security topics.

High – the speakers largely align on strategic priorities and risk‑mitigation approaches, suggesting a solid foundation for coordinated Indo‑German initiatives and for shaping broader multilateral AI governance.

Differences
Different Viewpoints
Use of open‑source AI models versus the need for domestically‑developed models to avoid strategic dependence
Speakers: Raphael Leuner, Norman Schulz, Shahani Yaktiyami
Prioritising open‑source AI reduces dependence on external vendors, but Chinese open‑source models pose strategic concerns (Raphael). Developing AI domestically ensures alignment with national values and avoids foreign model biases (Norman). There is a need to develop Indian open‑source alternatives to counter the dominance of Chinese models (Shahani).
Raphael argues that the Foreign Office should rely on open-source AI, even though many leading models now come from China, seeing this as a pragmatic way to accelerate deployment [128-136]. Norman counters that the safest way to align AI with national values is to build it in-house, warning that Chinese models embed their own ways of thinking [157-158]. Shahani adds that India should create its own open-source alternatives to reduce reliance on Chinese offerings [224-237]. The speakers therefore disagree on whether using existing open-source models (including Chinese) is acceptable or whether new domestic models are required.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates centre on whether open-source AI reduces or creates vulnerabilities, with some policymakers advocating domestic development to safeguard strategic assets while others promote openness for resilience [S39][S57][S41].
Whether AI should be allowed to shape geopolitical narratives or only be used for bias detection and support
Speakers: Shahani Yaktiyami, Norman Schulz, Raphael Leuner
AI should not autonomously shape geopolitical narratives; human oversight is essential (Shahani). AI can be employed to detect and mitigate bias in media and diplomatic narratives (Norman). AI is already being used as a tool for propaganda; continuous monitoring is required (Raphael).
Shahani maintains that narratives must remain under human control and warns against AI-generated geopolitics [277-280]. Norman acknowledges AI’s risk but emphasizes its role in detecting bias and assisting humans, while also cautioning against AI-written diplomatic reports [288-292][159-164]. Raphael points out that actors already exploit AI to amplify messages and create fake websites, highlighting a security risk [295-298]. The disagreement lies in the permissible role of AI: Shahani limits it to human-only narrative creation, Norman sees a supportive but limited role, and Raphael highlights current misuse.
POLICY CONTEXT (KNOWLEDGE BASE)
Algorithmic diplomacy discussions raise concerns about AI-driven narrative shaping versus its role as a bias-detection aid, highlighting the tension between influence and oversight [S49][S45][S50].
Priority of global cooperation versus domestic rapid development for AI governance and risk mitigation
Speakers: Norman Schulz, Raphael Leuner, Shahani Yaktiyami
International cooperation is needed to mitigate AI risks, analogous to nuclear‑era agreements (Norman). Fast co‑creation via data labs enables rapid AI solutions (Raphael). Middle powers can leverage AI through regulation (Germany) and application‑focused strategies (India) (Shahani).
Norman draws parallels with nuclear arms control and calls for international agreements to manage AI risks [66-76]. Raphael emphasizes internal fast co-creation within German data labs as the main advantage for AI deployment, focusing on speed over broader governance [18-23]. Shahani proposes that middle powers should use their comparative strengths (Germany's regulatory expertise and India's application focus) to shape AI governance without relying on great-power competition [48-50]. The speakers disagree on whether the primary path forward is global regulatory cooperation, rapid domestic innovation, or a middle-power-focused mix of regulation and application.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy literature contrasts the push for swift national AI capability building with the imperative for coordinated international governance, stressing strategic diversification and sovereignty considerations [S41][S42][S55][S56].
Historical claim that AI tactics are not new versus the view that they are new
Speakers: Shahani Yaktiyami, Norman Schulz
The tactics aren’t new (Shahani). The short answer would be no (to the statement that tactics aren’t new) (Norman).
Shahani asserts that while AI is a new technology, the diplomatic tactics it enables have long historical precedents, citing past revolutions [36-40][42]. Norman directly rejects this claim, answering “no” to Gunda’s question about the statement, implying that AI introduces genuinely new tactics [59]. This constitutes a clear disagreement on the novelty of AI-driven diplomatic tactics.
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts note that while AI technology is novel, many underlying influence tactics have historical precedents, a perspective articulated in recent diplomatic summits [S43].
Unexpected Differences
Interpretation of the statement that "the tech is new, but the tactics aren’t"
Speakers: Norman Schulz, Shahani Yaktiyami
The short answer would be no (Norman). The tactics aren’t new (Shahani).
The panelists unexpectedly diverged on a seemingly straightforward historical claim. While Shahani emphasized continuity of diplomatic tactics across technological revolutions, Norman directly contradicted her, suggesting that AI introduces novel tactics. This disagreement was not anticipated given the broader consensus on AI’s transformative potential.
POLICY CONTEXT (KNOWLEDGE BASE)
The same observation about the continuity of tactics despite emerging AI capabilities has been used to frame policy debates on the need for governance that addresses enduring influence methods [S43].
Overall Assessment

The discussion revealed substantive disagreements on three main fronts: (1) the strategic choice between using existing open‑source AI models (including Chinese) versus building domestic alternatives; (2) the permissible role of AI in shaping diplomatic narratives versus merely detecting bias; (3) the priority of global multilateral governance versus rapid domestic innovation or middle‑power‑focused strategies. Additionally, there was a clear split on whether AI introduces new diplomatic tactics. While participants shared the overarching goal of leveraging AI for diplomatic advantage, they diverged sharply on the pathways to achieve it.

High – The disagreements span strategic, technical, and normative dimensions, indicating that consensus on AI governance and deployment strategies among the panelists is limited. This fragmentation could hinder coordinated policy actions and suggests that further dialogue is needed to reconcile differing national priorities and risk assessments.

Partial Agreements
All three agree that AI must be harnessed to strengthen diplomatic capacity, but differ on the primary mechanism: Raphael stresses internal rapid development, Norman stresses multilateral risk‑mitigation agreements, and Shahani stresses a middle‑power blend of regulation and application focus. The shared goal is effective, secure AI use in diplomacy, yet the pathways diverge.
Speakers: Raphael Leuner, Norman Schulz, Shahani Yaktiyami
Fast co‑creation via data labs enables rapid AI solutions (Raphael). International cooperation is needed to mitigate AI risks, analogous to nuclear‑era agreements (Norman). Middle powers can leverage AI through regulation (Germany) and application‑focused strategies (India) (Shahani).
All three want to reduce reliance on external strategic rivals for AI. Gunda proposes Indo‑German co‑development, Raphael supports open‑source use while noting Chinese dominance, and Norman advocates domestic development to ensure value alignment. They share the objective of strategic independence but propose different collaborative or national approaches.
Speakers: Gunda Ehmke, Raphael Leuner, Norman Schulz
Open‑source AI models should be co‑developed by middle powers rather than relying on strategic rivals (Gunda). Prioritising open‑source AI reduces dependence on external vendors, but Chinese open‑source models pose strategic concerns (Raphael). Developing AI domestically ensures alignment with national values and avoids foreign model biases (Norman).
Takeaways
Key takeaways
The German Foreign Office uses fast, internal co‑creation via data labs to develop AI tools quickly, especially for processing large document sets and supporting negotiations.
AI is viewed as a diplomatic tool that can automate tedious information‑processing tasks, freeing diplomats for higher‑level analysis and relationship‑building, while final decisions remain human.
Historically, technology shapes diplomacy; AI continues this pattern, but the tactics (competition, regulation) are familiar.
Middle powers such as Germany and India can leverage AI through regulation (Germany) and application‑focused strategies (India), rather than trying to win the frontier AI race.
International cooperation and inclusive governance (e.g., the UN Global Digital Compact and the Independent Scientific Panel on AI) are seen as essential to manage AI risks, similar to nuclear‑era agreements.
Open‑source AI is preferred to reduce dependence on external vendors; however, the prevalence of Chinese open‑source models raises strategic concerns, prompting calls for Indo‑German alternatives.
Sector‑specific cooperation (industrial AI, healthcare, robotics) is identified as a practical avenue for Indo‑German collaboration, combining German data/automation expertise with Indian model‑building capacity.
AI should not autonomously shape geopolitical narratives; human oversight is required, and AI can be used to detect and mitigate bias in media.
AI is already being weaponised for propaganda and misinformation, necessitating continuous monitoring by diplomatic services.
Resolutions and action items
German Foreign Office commits to using open‑source technologies and re‑using existing state applications for AI projects.
Proposal to pursue joint Indo‑German AI projects in industrial automation and healthcare, leveraging complementary strengths.
Support for the Global Digital Compact and participation in the UN Independent Scientific Panel on AI to promote inclusive, science‑based AI governance.
Agreement to monitor and counter AI‑driven misinformation and propaganda as part of diplomatic work.
Unresolved issues
Specific mechanisms for ensuring AI systems align with national values and security/sovereignty requirements remain undefined.
How to develop and scale non‑frontier, open‑source AI models that can serve as alternatives to Chinese offerings is not yet resolved.
Details of concrete Indo‑German cooperation frameworks, funding, and governance structures were discussed but not finalized.
Implementation pathways for AI‑driven bias detection tools in media and diplomatic analysis need further elaboration.
The broader question of how to balance rapid AI innovation with regulatory oversight across middle powers remains open.
Suggested compromises
Adopt a middle‑power strategy focused on sector‑specific collaboration rather than competing for frontier AI dominance.
Utilise open‑source AI to democratise access while jointly developing Indo‑German models to reduce reliance on any single external source.
Pursue managed interdependence: each country contributes its strengths in the AI stack (e.g., Germany’s data/automation, India’s model‑building) without creating dependency.
Combine regulation (German approach) with application‑driven innovation (Indian approach) to achieve balanced AI governance.
Thought Provoking Comments
We used to have IT development projects that take two years, have huge teams, cost a lot of money, but that are just not fast enough to deliver on an AI solution that our colleagues are already experiencing in their private lives.
Highlights the mismatch between traditional bureaucratic IT cycles and the rapid pace of AI development, emphasizing the need for agile, in‑house co‑creation within ministries.
Set the practical tone of the discussion, prompting other panelists to consider speed and internal collaboration as critical factors for AI adoption in diplomacy.
Speaker: Raphael Leuner
The technology is new, yes, but the tactics aren’t. AI is the latest tool in a long history where technology shapes diplomacy, from the Industrial Revolution to the nuclear age.
Challenges the notion that AI is a completely novel disruptor, reframing it as a continuation of historical tech‑driven diplomatic shifts while stressing new strategic tactics.
Shifted the conversation from a purely technical focus to a geopolitical and historical perspective, leading to deeper discussion on middle‑power strategies and value‑chain leverage.
Speaker: Shahani Yaktiyami
Middle powers like India and Germany can express power on the AI value chain: Germany through rules and regulation, India through applications and deployment.
Introduces the concept that countries not leading in frontier AI can still exert influence by focusing on regulatory frameworks or sector‑specific applications.
Prompted subsequent speakers (e.g., Norman and Shyam) to explore concrete cooperation avenues and the role of regulation versus innovation in AI diplomacy.
Speaker: Shahani Yaktiyami
The best way to align AI systems with our values is to develop them ourselves, not just procure them from outside—even if they are open‑source Chinese models.
Raises sovereignty and ethical concerns, arguing for domestic development to maintain control over AI’s underlying biases and strategic implications.
Steered the dialogue toward the importance of indigenous AI capabilities, influencing Raphael’s later remarks on open‑source models and the need for alternative (e.g., Indian) solutions.
Speaker: Norman Schulz
Open‑source is a democratizing force—think of the 1990s open‑source OS revolution. By collaborating as middle powers we can reduce costs and spread access to AI.
Draws a historical parallel to illustrate how open‑source can level the playing field, suggesting a practical pathway for Indo‑German cooperation.
Expanded the conversation from high‑level geopolitics to actionable collaboration models, leading to discussion of sectoral projects like healthcare and industrial AI.
Speaker: Shyam Krishnakumar
We see a lot of AI models from China being adopted globally; it’s important for countries like India and Germany to develop their own models to offer alternatives.
Points out the geopolitical risk of dependence on Chinese AI, linking model provenance to strategic autonomy.
Reinforced Norman’s sovereignty argument and motivated the panel to consider building a diversified, multi‑regional AI ecosystem.
Speaker: Raphael Leuner
AI cannot replace human diplomatic creativity; it can summarize and replicate, but innovative thinking must come from people.
Counters the hype that AI will automate diplomatic analysis, emphasizing the irreplaceable human element in policy formulation.
Grounded the discussion in realistic expectations, influencing the audience Q&A about automation and reinforcing the theme of AI as a tool, not a decision‑maker.
Speaker: Norman Schulz
I would not let AI shape narratives. If we allow AI to do that, we risk bias and manipulation; we must instead develop mitigation strategies and use AI to detect bias.
Raises ethical concerns about AI‑generated propaganda and stresses the need for human oversight and bias‑detection tools.
Prompted a rapid exchange on media influence, leading Raphael to note AI’s role in amplifying fake narratives and highlighting the dual‑use nature of the technology.
Speaker: Shahani Yaktiyami
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved it from a surface‑level overview of AI tools to a nuanced debate about geopolitical strategy, sovereignty, and ethical governance. Raphael’s emphasis on agile, internal co‑creation highlighted operational challenges, while Shahani’s historical framing and middle‑power lens reframed AI as a diplomatic lever rather than a mere technology. Norman’s sovereignty argument and calls for domestic development introduced a critical security dimension, which Raphael and Shyam reinforced by pointing to the dominance of Chinese open‑source models and the democratizing potential of open‑source collaboration. Together, these comments redirected the conversation toward concrete Indo‑German cooperation, sector‑specific applications, and the limits of AI in shaping policy and narratives, ultimately shaping a balanced view of AI as both an opportunity and a risk for foreign ministries.

Follow-up Questions
What concrete use cases of AI can be applied in diplomacy and foreign policy?
Understanding practical applications will help ministries move from theory to implementation.
Speaker: Gunda Ehmke
How should AI be governed within the German Foreign Ministry and is the current approach adequate?
Effective governance frameworks are needed to ensure responsible AI use in diplomacy.
Speaker: Gunda Ehmke (question), Norman Schulz (response)
What specific areas of Indo‑German cooperation in AI (e.g., industrial AI, healthcare AI) are most promising?
Identifying concrete bilateral projects can leverage complementary strengths of both countries.
Speaker: Gunda Ehmke (prompt), Shyam Krishnakumar (response)
Can open‑source AI be a viable solution for foreign ministries, and what are the security implications of relying on Chinese open‑source models?
Evaluating open‑source options is crucial for cost‑effectiveness and sovereignty concerns.
Speaker: Raphael Leuner
How can the discussion on AI impact be turned into concrete, actionable cooperation rather than just governance frameworks?
Moving from high‑level dialogue to implementation steps is needed to realize AI benefits.
Speaker: Gunda Ehmke (prompt), Shahani Yaktiyami (response)
What are the details of the Global Digital Compact and the Independent Scientific International Panel on AI, and how can we ensure AI systems align with democratic values?
Clarifying these mechanisms will help embed values and inclusivity into global AI governance.
Speaker: Gunda Ehmke (prompt), Norman Schulz (response)
Which parts of foreign‑policy research, decision‑making and implementation can be automated by AI?
Identifying automation opportunities can increase efficiency for diplomats and analysts.
Speaker: Audience member Sreeni (question)
Will AI help create more unbiased media narratives, and how can potential bias be mitigated?
Understanding AI’s influence on information flows is vital for democratic discourse and security.
Speaker: Audience member Sanjeevni (question), Shahani Yaktiyami, Norman Schulz, Raphael Leuner (responses)
How should geopolitical risk be assessed when adopting open‑source AI models from countries like China?
Risk assessment is needed to prevent unintended strategic dependencies.
Speaker: Raphael Leuner (implied), Norman Schulz (implied)
Can Indian open‑source large language models serve as alternatives to Chinese models for Europe and other partners?
Diversifying model sources could reduce reliance on any single geopolitical bloc.
Speaker: Raphael Leuner
Should foreign ministries develop sovereign AI capabilities in‑house to ensure alignment with national values?
Building domestic AI may safeguard against external influence and ensure value alignment.
Speaker: Norman Schulz
How can the concept of ‘managed interdependence’ replace traditional sovereignty debates in AI governance?
A nuanced framework could better reflect the interconnected AI ecosystem.
Speaker: Shahani Yaktiyami
What will be the outcomes of the upcoming UN AI dialogue in Geneva and how will they shape global AI governance?
Monitoring the dialogue’s results will inform future policy and cooperation strategies.
Speaker: Norman Schulz (implied)
What bias‑detection technologies and mitigation strategies are needed to address AI‑driven misinformation and narrative manipulation?
Technical tools are required to counteract AI‑enabled bias and protect democratic processes.
Speaker: Shahani Yaktiyami

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Agentic AI in Focus Opportunities Risks and Governance

Session at a glanceSummary, keypoints, and speakers overview

Summary

The opening panel was convened to explore both the business case for agentic AI and the public-policy measures needed to encourage and safeguard its use [1-3]. Austin Mayron, Acting Director of the U.S. Center for AI Standards and Innovation (CAISI), described the agency’s placement within the Department of Commerce and its partnership with NIST to develop voluntary standards that help industry adopt AI agents [15-20][26-29].


CAISI has recently launched an AI-agent standards initiative, issued a request for information on agent security, and announced sector-specific listening sessions on healthcare, education and finance to gather industry challenges [32-38][39-41]. Prith Banerjee described how Synopsys is creating “agentic engineers” that augment human designers in rapid chip-to-system development, enabling yearly product cycles that would otherwise be impossible [73-81][88-94]. Caroline Louveaux explained that MasterCard is moving from AI that merely recommends to AI agents that act, such as in real-time fraud detection, and she outlined four guardrails (knowing the agent, security-by-design, clear consumer intent, and traceability) to ensure safe, accountable payments [105-112][218-226][229-236]. Syam Nair highlighted that NetApp is embedding agents near storage controllers to improve data quality for AI workloads, noting that the technology is still early (around level three of a five-level autonomy scale) and that multi-level guardrails are required [132-140][141-148].


Austin urged a bottom-up, industry-driven approach to standards, citing ongoing RFI processes and upcoming listening sessions, and suggested that CAISI could develop benchmarks for handling personally identifiable information in regulated sectors [156-164][168-171]. Prith warned that autonomous, software-defined systems such as cars or aircraft could become weapons if compromised, emphasizing the need for exhaustive verification and validation before hardware prototyping [191-207]. Syam added that data governance is critical because agents act on data without empathy, and that ultimate accountability must remain with human owners, requiring coordinated public-private guardrails [240-248].


Panelists agreed that voluntary, consensus-based standards are preferable to top-down regulation and identified the OECD as the leading multilateral forum for AI principles and reporting frameworks [172-176][386-393]. Additional recommendations included developing technical benchmarks for multi-agent systems and leveraging events such as Singapore International Cyber Week and global bodies like the ITU and UN to foster inclusive coordination [401-406][429-434]. The discussion concluded that aligning industry standards, robust guardrails, and international cooperation will be essential to unlock the benefits of agentic AI while managing its risks [435-440].


Keypoints

Major discussion points


Government-industry collaboration on standards and security for agentic AI – CAISI (the U.S. Center for AI Standards and Innovation) explains its placement within the Department of Commerce and NIST, its role as a “front door” for industry, and its recent AI-agent standards initiative, including RFIs on security and sector-specific listening sessions [13-18][19-30][32-38][156-166][168-171].


Business use-cases of agentic AI across sectors


Semiconductor design: Synopsys creates “agentic engineers” that augment human designers to handle the exploding complexity of chip and system design, enabling faster product cycles [55-73][88-95][90-94].


Payments & fraud prevention: Mastercard moves from recommendation-only AI to “agentic” AI that detects and blocks fraudulent transactions in milliseconds, and it has defined four guard-rails (know-your-agent, security-by-design, clear consumer intent, traceability) to ensure safe autonomous payments [105-115][218-231].


Data-centric cloud services: NetApp develops storage-proximate agents that improve data quality and enable real-time security actions, while emphasizing the need for multi-level guard-rails and strong data governance [132-141][235-244].


Enterprise guard-rails and risk-management concerns – Panelists stress that agentic systems must operate under clear permissions, human oversight, and robust governance. Mastercard’s four guard-rails, NetApp’s layered safeguards (public-private partnership, data lineage, human accountability), and Prith’s safety warnings about autonomous physical systems (e.g., weaponised cars or aircraft) illustrate the breadth of risk-management strategies [179-208][218-231][236-248].


Policy recommendations focused on voluntary, consensus-based standards and global coordination – Austin highlights a bottom-up approach to standards development; Ellie urges regulators to consider the autonomy continuum and human-in-the-loop vs. human-on-the-loop models [277-284][287-289]; Carly calls for open standards and cross-regional harmonisation (Singapore, India); Danielle and Sam point to the OECD as the primary multilateral venue, while also noting the role of safety-institute consortia; Jennifer adds that regional groups should complement OECD work; Combiz stresses inclusion of bodies such as the ITU and UN [386-398][401-402][418-423][430-434].


Purpose of the panel – The session is framed as a two-part discussion: first to map business use-cases of agentic AI, then to explore public-policy implications and what governments should do to encourage safe adoption [1-6][249-256].


Overall purpose/goal


The panel aims to bridge the business and policy worlds by showcasing concrete agentic-AI applications, identifying the practical challenges and guard-rails needed for safe deployment, and delivering concrete recommendations to policymakers on how standards, coordination mechanisms, and regulatory approaches can foster responsible innovation while protecting consumers and critical infrastructure [1-6][249-256].


Tone of the discussion


Opening: Formal and forward-looking, with a clear agenda-setting tone [1-6][13-18].


Technical deep-dives: Energetic and optimistic as speakers describe transformative use-cases (Synopsys, Mastercard, NetApp) [55-73][105-115][132-141].


Cautionary moments: A shift to a more urgent, even “scary” tone when highlighting safety risks in physical AI (autonomous cars, weaponised systems) and the sushi-order anecdote [179-208][228-231].


Collaborative & constructive: Returns to a cooperative tone as panelists discuss standards, share best-practice recommendations, and acknowledge the need for global coordination [156-166][277-284][386-398].


Closing: Appreciative and hopeful, emphasizing partnership between industry and governments and thanking participants [435-441].


Overall, the conversation moves from informative introductions to enthusiastic showcase of technology, through a brief but pointed warning about risks, and culminates in a collaborative, solution-oriented tone aimed at shaping policy.


Speakers


Jason Oxman


Area of expertise: Technology industry leadership, AI policy moderation


Role / Title: Moderator/Host; President & CEO of the Information Technology Industry Council (ITI) [S14][S15]


Austin Mayron


Area of expertise: AI standards, innovation policy, government-industry liaison


Role / Title: Acting Director, U.S. Center for AI Standards and Innovation (CAISI) [S9][S10]


Prith Banerjee


Area of expertise: Semiconductor design automation, AI-driven engineering


Role / Title: CTO and SVP, Synopsys (design software automation semiconductor company) [S17][S18]


Caroline Louveaux


Area of expertise: Payments security, privacy, AI-enabled fraud detection


Role / Title: Chief Privacy AI and Data Responsibility Officer, MasterCard [S16]


Syam Nair


Area of expertise: Multi-cloud storage, data quality, AI-driven data preparation


Role / Title: Chief Product Officer, NetApp (global multi-cloud service provider) [S1]


Danielle Gilliam-Moore


Area of expertise: AI public policy, governance frameworks


Role / Title: Director of Global Public Policy, Salesforce (leads AI policy work) [S2][S3]


Combiz Abdolrahimi


Area of expertise: Governance, standards, policy implementation (former regulator)


Role / Title: Industry professional with former government/regulatory experience (specific title not specified) [S4]


Ellie Sakhaee


Area of expertise: AI public policy, machine learning, human-in-the-loop governance


Role / Title: Public Policy Team Member, Google; Ph.D. in Computer Science / Machine Learning [S5][S6]


Sam Kaplan


Area of expertise: Cybersecurity policy, AI risk standards


Role / Title: Assistant General Counsel for Global Policy, Palo Alto Networks [S7]


Jennifer Mulvaney


Area of expertise: Technology policy advocacy, human-centered AI


Role / Title: Public Policy Lead, Adobe [S11]


Carly Ramsey


Area of expertise: Internet infrastructure, AI standards, regional policy coordination


Role / Title: Lead, Public Policy for Asia Pacific, Cloudflare (based in Singapore) [S12][S13]


Additional speakers:


None (all speakers appearing in the transcript are included in the list above).


Full session report: Comprehensive analysis and detailed insights

The discussion opened at the AI Impact Summit, organized by the Information Technology Industry Council (ITI), with Jason Oxman outlining a two-part agenda: first to map the business case for “agentic AI” (AI that can act autonomously rather than merely provide recommendations) and second to explore the public-policy measures needed to encourage its use while safeguarding society [1-6].


Austin Mayron (Acting Director, U.S. Center for AI Standards and Innovation – CAISI) then described CAISI’s role and organisational context. CAISI sits within the Department of Commerce and, as the “front door for industry to the United States government,” is the primary entry point for industry engagement with federal AI policy [13-20]. He noted that “the other aspect of our organization that bears note is that we are co-located with the National Institute of Standards and Technology (NIST)” [13-20]. CAISI also draws talent from frontier AI labs, which helps it explain novel concepts to other parts of the administration [13-20]. The centre evolved from the U.S. AI Safety Institute to a standards-and-innovation focus in June 2025, signalling a shift from prescriptive safety to enabling innovation [16-18][S1][S19].


Just this week, CAISI kicked off an AI-agent standards initiative [32-38]. It issued a Request for Information (RFI) on AI-agent security, and a companion draft publication from NIST’s Information Technology Laboratory on AI identity and verification is currently open for public comment [32-38]. Within days it announced sector-specific listening sessions on health-care, education and finance to collect industry-level barriers [32-38][156-166][168-171].


The business-case speakers followed.


Prith Banerjee (Synopsys) presented a hardware-centric use case. Synopsys, the leading electronic-design-automation provider, is expanding from chip design to “chips-to-systems” after acquiring Ansys for $35 billion [61-63]. He described “agentic engineers” – AI-driven agents that perform low-level reasoning tasks in chip and system design, complementing rather than replacing human engineers [90-94]. Accelerating product cycles in automotive and aerospace (from multi-year to annual cadences) and the growing complexity of designs (now trillions of transistors) exceed what human designers alone can manage [73-84][85-88]. These agents enable rapid verification and validation before hardware prototyping, a necessity when physical AI controls safety-critical functions such as brakes or steering [85-87][88-95].


Caroline Louveaux (Mastercard) offered a financial-services example. Mastercard has moved from AI that merely recommends actions to “agentic AI” that actively detects suspicious transactions, triages fraud signals and initiates secure payment flows in milliseconds [105-108][109-115]. She emphasized that such AI agents must operate within clearly defined permissions and be subject to continuous human oversight [111-115]. To institutionalise this, Mastercard devised a four-point guard-rail playbook: (1) “Know Your Agent” – verify the agent’s legitimacy; (2) security-by-design – protect credentials through tokenisation; (3) explicit consumer intent – ensure the user authorises each purchase; and (4) traceability/auditability – maintain records for dispute resolution and regulator confidence [218-236].
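The four-point guard-rail playbook can be illustrated as a set of pre-flight checks on an agent-initiated payment. This is a minimal sketch, not Mastercard's implementation: the class, field names, `tok_` prefix and agent registry are all hypothetical, chosen only to make the four checks concrete.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPaymentRequest:
    """Hypothetical agent-initiated payment request (illustrative only)."""
    agent_id: str
    credential_token: str      # tokenised credential, never the raw card number
    user_consented_item: str   # what the consumer explicitly authorised
    item: str
    amount: float
    audit_log: list = field(default_factory=list)

# Illustrative "Know Your Agent" registry of verified agents
REGISTERED_AGENTS = {"shopping-agent-001"}

def check_guardrails(req: AgentPaymentRequest) -> bool:
    """Apply the four guardrails before any funds move."""
    # 1. Know Your Agent: is this a verified, registered agent?
    if req.agent_id not in REGISTERED_AGENTS:
        req.audit_log.append("rejected: unknown agent")
        return False
    # 2. Security-by-design: accept only tokenised credentials.
    if not req.credential_token.startswith("tok_"):
        req.audit_log.append("rejected: raw credential")
        return False
    # 3. Explicit consumer intent: purchase must match what the user authorised.
    if req.item != req.user_consented_item:
        req.audit_log.append("rejected: intent mismatch")
        return False
    # 4. Traceability: record the approval for dispute resolution.
    req.audit_log.append(f"approved: {req.item} for {req.amount}")
    return True
```

The audit log in step 4 is what gives regulators and consumers a record to fall back on when a dispute arises, which is why traceability sits alongside, rather than after, the other three checks.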


Syam Nair (NetApp) described a data-centric deployment. NetApp embeds AI agents close to storage controllers so that data can be prepared for AI workloads without moving it through cumbersome pipelines [135-137]. This proximity improves data quality, especially for unstructured data, and enables real-time security actions such as detecting threats within the 59-second average breach window [138-140]. He placed NetApp’s capability at roughly level 3 of a five-level autonomy spectrum, indicating an early-stage but rapidly progressing effort [141-148]. Nair warned that the “blast radius” of an error grows when many agents operate across an enterprise, so guardrails must be layered: public-private partnership on policy, rigorous data-governance to preserve lineage, and the principle that ultimate accountability remains with human owners [239-248].


The panel then turned to enterprise-wide risk management and guard-rail design. Prith Banerjee warned that software-defined physical systems (autonomous cars, aircraft) could be weaponised if compromised, citing a hacked car in Mumbai used as a weapon [191-198]. He argued that exhaustive digital-level verification (aiming for near-100% coverage) is essential before any hardware is fabricated [205-207]. Caroline’s four-guard-rail framework and Syam’s layered approach echoed this need for clear permissions, human-in-the-loop oversight, and auditability [218-236][239-248]. Austin reinforced that standards development must be bottom-up, gathering input from field experts before defining problems and adopting a “humility-driven” approach that treats industry as the primary source of insight [158-162]. He also highlighted the importance of interoperability in future standards [172-174].


All panelists agreed that voluntary, consensus-based standards driven by industry-government collaboration are preferable to prescriptive regulation [172-174][19-22][26-29][332-339][304-307][364-368]. Carly Ramsey stressed that open models and open standards are needed to avoid fragmented regional regimes [304-307][S1]. Combiz Abdolrahimi added that abstract principles must be translated into concrete playbooks, benchmarks and operational guidance [364-368].


Policy recommendations converged on a human-centric, risk-based approach. Jennifer Mulvaney (Adobe) reminded the audience that policy should always protect humans first, asking “what does this mean for humans and how can we prevent harm?” [263-270][S73]. Ellie Sakhaee (Google) proposed regulating the applications of AI agents rather than the underlying models and suggested a continuum of autonomy that moves from “human-in-the-loop” to “human-on-the-loop” and eventually “human-in-command” as agents become more reliable [277-284][285-286]. This graduated oversight model mirrors the FAA’s transition from pilot-always-in-sight to pilot-on-the-loop for drones [285-286].
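The autonomy continuum described here, combined with the five-level autonomy scale mentioned earlier in the session, can be sketched as a simple mapping from autonomy level to oversight model. This is an illustrative sketch only: the level names and the thresholds at which oversight shifts are assumptions, not anything the panelists specified.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Hypothetical five-level autonomy scale (level names are assumptions)."""
    ASSISTED = 1      # AI suggests, a human does the work
    COPILOT = 2       # AI drafts, a human approves each step
    SUPERVISED = 3    # AI acts on routine tasks under close monitoring
    AUTONOMOUS = 4    # AI executes end-to-end, a human can intervene
    MULTI_AGENT = 5   # networks of agents coordinating with one another

def oversight_model(level: AutonomyLevel) -> str:
    """Map an autonomy level to a graduated oversight model (illustrative thresholds)."""
    if level <= AutonomyLevel.COPILOT:
        return "human-in-the-loop"   # a person approves every action
    if level <= AutonomyLevel.AUTONOMOUS:
        return "human-on-the-loop"   # a person monitors and can step in
    return "human-in-command"        # a person sets goals and boundaries
```

The design point of such a mapping is that oversight requirements are tied to measured reliability rather than fixed per technology, which is what allows the model to relax as agents earn trust.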


International coordination was identified as essential. Danielle Gilliam-Moore (Salesforce), Sam Kaplan (Palo Alto Networks) and Jennifer Mulvaney all pointed to the OECD’s AI principles and reporting framework as the primary global anchor, noting its influence on the EU AI Act and numerous U.S. state drafts [386-393][401-403][418-420]. Carly Ramsey added that regional events such as Singapore International Cyber Week provide a practical venue for cross-border dialogue and for aligning Singapore’s AI-governance framework with NIST standards [404-406][S1]. Sam highlighted the International Consortium of Safety Institutes as a tactical forum for developing technical taxonomies, while Combiz broadened the scope to include the ITU, UN and AI-for-Good initiatives [401-402][432-434].


Modest disagreement emerged over the optimal multilateral platform and the balance between global standards and agile, sector-specific frameworks. Danielle advocated a top-down reliance on the OECD; Carly preferred a region-focused cyber-week; Sam suggested a safety-institute consortium; and Combiz called for broader UN-based engagement [386-393][404-406][401-402][432-434]. Similarly, Danielle argued for fast, ministry-driven governance to fill gaps left by slow-moving ISO processes, whereas others (Carly, Sam, Austin) emphasised the need for globally harmonised voluntary standards [353-358][304-313][158-162].


Key take-aways


1. Safe, widespread adoption of agentic AI depends on (i) voluntary, consensus-based standards developed through a bottom-up industry-government partnership; (ii) layered enterprise guardrails that embed security-by-design, clear permissions, data-governance and human accountability; (iii) a human-centric policy lens that scales oversight with agent autonomy; and (iv) coordinated international effort anchored by the OECD but complemented by regional forums and technical consortia.


2. Unresolved issues include (a) precise technical specifications for AI-agent security and benchmarks for multi-agent interactions; (b) mechanisms for harmonising regional and global standards; and (c) definition of autonomy thresholds for shifting oversight models.


Continued collaboration among standards bodies, industry, academia and governments will be required to close these gaps.


Session transcript: Complete transcript of the session
Jason Oxman

Our second discussion will be this panel, which will discuss the business case use of agentic AI. And then we’ll follow that with a second panel, which will discuss the public policy implications of agentic AI. That is to say, what government should be doing to encourage and to safeguard the use of agentic AI. We all know that agentic AI is quite literally the AI of agents. And there’s been a lot of discussion here at the AI Impact Summit about how agentic AI is creating new opportunities for jobs, for societal benefits, for use cases across different industries. And one of the most important questions is, of course, what public policy solutions are going to be necessary to encourage the use of agentic AI.

So I’m very pleased to welcome as our opening speaker, Austin Mayron, who is the Acting Director of the Center for AI Standards and Innovation, and a senior, you have the longest title in the world, Austin. Thank you. Senior Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office. Austin, we are thrilled to have you here. You have some very interesting updates on how the U.S. administration is approaching agentic AI, including what the office is doing, which I think is enormously important as well. So you’re going to join us for a few minutes of table-setting remarks, if you will, and we’re thrilled to have you here.

Austin, I’ll turn it over to you.

Austin Mayron

Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin Mayron, and I’m the Acting Director of the U.S. Center for AI Standards and Innovation, also called CAISI. CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation. That signaled a shift away from safety principles, more towards standards and innovation. I think there’s two organizational aspects of CAISI that are worth note. The first is that we’re located within the Department of Commerce. We are very focused on helping industry. The Secretary has tasked us to be the front door for industry to the United States government, and we really see ourselves as serving in that role.

We collaborate with various aspects of the AI ecosystem, including the frontier labs, for instance, on pre-deployment evaluations. And we like to partner with industry to help understand government. As one example, sometimes there’s a lack of AI expertise within the U.S. government. And CAISI, because we have talent from frontier AI labs, we’re able to help explain novel concepts to other aspects of the administration. The other aspect of our organization that bears note is that we’re co-located with NIST, the National Institute of Standards and Technology. And the thing that’s worth noting there is that NIST, throughout its history, it hasn’t been a regulatory organization. It’s been an organization that’s promoted economic growth and technological development by developing standards and facilitating the development of standards and best practices.

And so CAISI, we see our role as partnering with industry to develop the standards and best practices they need to flourish. And here, we’re here today to talk about AI agents, which is an incredibly timely topic. And so I thank ITI for organizing this. Just this week, CAISI, my organization, we kicked off an AI agent standards initiative. Our goal is to hear from industry how traditional standards work, best practices, guidelines can help unlock and facilitate adoption. So one area where we’ve already started that work is on AI agent security. We put out a request for information or RFI about what challenges industry is facing with AI agent security. Our colleagues at NIST at the Information Technology Laboratory also have a publication out for comment on AI identity and verification, which we encourage you, if you’re interested, please look at the documents, review them, send in your comments.

We also announced this week that we’re going to be holding sector-specific listening sessions on barriers to adoption, in the sectors of health care, education and finance. And our goal here is we want to learn actually what are the challenges that industry is facing. These AI agents, they have tremendous potential, but we want to understand how CAISI and NIST and the U.S. government can help unlock adoption through standards and best practices. So I’m delighted to be here and take part in this conversation and learn more from my fellow panelists.

Jason Oxman

Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agentic AI. As I mentioned, we have three great experts here to start us off on the business side discussion before we move to the policy side discussion, because I really think it’s important for us to understand exactly what use cases of agentic AI are happening across different segments of the AI stack. So we’re very fortunate to have three experts here to help us with this discussion. Prith Banerjee is the CTO and SVP of Synopsys, the design software automation semiconductor company. Great to have you here, Prith. Caroline Louveaux is Chief Privacy AI and Data Responsibility Officer at MasterCard.

Caroline, thanks for being here. And also delighted to have Syam Nair, who is Chief Product Officer at NetApp, the global multi-cloud service provider. And so the three of them are each going to share a couple of minutes of opening remarks on agentic AI use cases. What we’ve asked them each to do is share with all of you kind of the top favorite agentic AI use case that’s happening so that we can use that as a way to frame the discussion around business and policy solutions. So if we could, Prith, I’ll start with you for your favorite agentic AI use case that’s happening at Synopsys.

Prith Banerjee

Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agentic AI is actually the core of this. But before I do that, I want to share with you what Synopsys does. Synopsys is the leading provider of electronic design automation tools and IP to design chips. So the chips from, say, NVIDIA or AMD or Broadcom, Qualcomm, these billion transistor chips, trillion transistor chips, are designed with Synopsys tools. But the opportunity that Synopsys has seen is these chips are going into systems, systems that are like cars or aircraft or spacecraft or data centers, healthcare, et cetera, right? So we have this vision of chips to systems, and because of that, Synopsys recently acquired Ansys for $35 billion, right, to be a chips to systems company.

I came into Synopsys as CTO at Ansys. So now the challenge that I want to share with all of you is as you are designing a car, right, it’s a software-defined car, right, a Tesla car has more than 100 million lines of C code in that car. That code runs on an ECU, an ECU designed by NXP or STMicro or Qualcomm. And that chip is still not yet designed, right? It is being designed with, say, Synopsys tools, but you’re writing software on that chip, and so you have to do what is called software-defined verification validation, right, before the chip is designed. And that control will control the electric brakes, the electric steering, the autonomous driving of the car.

And the car is, it’s a physical product, it is being driven on the road, right? And so you use Ansys physics simulation like Fluent for aerodynamics or LS-DYNA for crash or HFSS for electromagnetics. So essentially what we are doing is bringing the physics of the world around us, powered by AI, along with the chip design, in what we call intelligent product design, which is silicon design. So the chip inside any complex design is software-enabled, so you can do software updates, and AI-driven. So that’s all the context, and we are a $10 billion company with a market cap of 100 billion. So the agentic AI part is the following, that the pace of innovation in the world is changing.

You used to design a new car every 7 years or maybe 5 years. That pace of innovation is changing. Like Tesla, Elon Musk said we have to do it every year. Every year they want to bring a new car to market. Or NVIDIA Jensen, right? The chip design used to be every three years. NVIDIA Jensen says you have to do it every year. So the pace of innovation is becoming faster, and the complexity. You used to have a chip with maybe a million transistors. Now it’s a billion transistors. It’s a trillion transistors. It’s incredibly complex. And then you have the chip with all the complicated system. The complexity is so hard that you used to have human designers at Qualcomm, NVIDIA, etc. who could design those things using the Synopsys tools. You cannot do that anymore. It is very, very hard. That’s where agentic AI is coming in. So at Synopsys what we have created is agentic engineers. These are like human engineers that are not trying to take the jobs of human engineers away. They are going to complement the job of a human engineer, so you at Broadcom, Qualcomm, you have a hundred thousand engineers, but you will be complemented with another 200,000 agentic engineers from Synopsys who will do the lower level reasoning job like a human, right? But the human will still be in the loop to make sure that you are not doing drastic sort of bad things, right?

This is the incredible opportunity. But as the world talks about agentic AI in the world of large language models and data and words as tokens, our world is what we call physical AI, which is physics, and it’s the physical AI part where we are applying our agentic engineering technology to. Very, very exciting area.

Jason Oxman

That’s great. And I love how you described the human engineers being complemented by, not replaced by, the agentic AI that’s helping them be more efficient and do their jobs better. Caroline, I think of payments networks as having used AI for decades, literally. The fact that you can take a plastic card and tie it back to a human being, no matter where they are in the world, is actually truly remarkable. When you think about how payments networks work, it is truly remarkable, the technology, especially since you’re processing literally millions of transactions a second around the world. So with that, you look over global AI for MasterCard, and I’m curious how agentic AI is influencing the work that you and your colleagues do to make these payments rails run around the world.

Caroline Louveaux

Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have been leveraging AI for decades to make our payment network safer and more secure for everyone. Now with agentic, we are moving from AI systems that recommend to AI systems that act, right? And in cybersecurity and payments, the shift is already real today. AI agentic systems are being deployed, for example, to detect suspicious transactions, to triage fraud signals, to initiate secure payment flows. If you think about it, if we want to be able to detect and to block fraud in real time, decisions have to be made in milliseconds, at scale. And of course, while speed and scale matter a lot, accountability is a must.

What’s important is that these agents don’t make decisions with open-ended autonomy. They must act within clear values, principles, within clear permissions. What is the agent allowed to do? What is it not allowed to do? And when does a human need to step in? And of course, humans have to have full oversight end-to-end. So, I mean, there are many other use cases. I’m happy to talk more about that, but I think that’s really our main use case. But of course, the technology is moving really, really fast. We are now talking about this multi-agent ecosystem that raises a whole new range of opportunities as well as novel challenges. And so that’s where these kind of summits where we all come together are really, really important to really get it right.

Jason Oxman

I love how you characterize it as moving from what we call assistive AI to operational AI. In other words, instead of just helping with a task, the AI, as an agent, can actually take a task on. Still oversight in the system, and, I should have previewed this, we’re going to come back around and talk to the panelists about guidelines and protections, and as Austin importantly noted at the outset, the security of the system, how that’s built in as well. And, Syam, I want to come to you next. The multi-cloud that NetApp operates obviously is moving data around the world on behalf of customers, storing data around the world and allowing your customers to access data in a multi-cloud environment.

How is agentic AI helping NetApp with that level of customer service?

Syam Nair

Thank you. So NetApp actually, as you said, is multi-cloud; we power both public cloud as well as private cloud. Much of the largest data infrastructure is actually built on NetApp, from a file storage standpoint. One of the key challenges in AI itself is having quality of data. Data quality is super important, and the previous session actually talked about it. And data quality, especially from truly unstructured data, how do you really get the structured value out of it? And that’s where agents can actually help, which is we are developing agents which are sitting closer to the storage controller. If you know the storage architecture, without moving data and going through cluttered pipelines and, you know, positioning the data ready for AI, you can actually have the data at the source itself, which will be ready for AI.

And how this helps is, you know, in many of the areas, cybersecurity, as it continues to grow as a threat, you know, 59 seconds is the average breakout time of a threat these days, risk and threat will become super important to manage. And you need to do that at the layer where the data sits. So agentic has a really good use case with respect to that. We are still early in our journey in terms of building these capabilities. One would say, look, if you have five levels of agentic AI, where level one is mostly assisted, co-pilot, to autonomous agents, running a network of agents at level five, we’re still in that journey, somewhere in the three range.

And that’s what we see from customers in terms of how they want to leverage data. So that’s one of my favorite use cases in preparing the data, making sure that the right data is available both for the agents and the agents can make it available for the use cases.

Jason Oxman

Yeah, interesting. So the agents are actually helping you expose any risks that may need to be addressed as part of that provisioning of data. And, Austin, I’m going to ask you to set up our second round question with me. And that is, you know, the industry has a responsibility to inform governments about risks and how they’re being addressed, as we move into the next question for the panel around enterprise guardrails that companies are seeing.

Is there anything in particular you would flag that you’re looking to hear from industry in the U.S. administration about those guardrails? You are overseeing an operation that asks for industry input, which I think is rare and particularly great. So thank you for doing that. Perhaps some practice tips that you can provide to everyone in the room about what it is helpful to provide government, the U.S. administration or other government colleagues that you’ve heard from on these issues and how it’s helpful to provide that information.

Austin Mayron

Yeah, absolutely. So at CAISI, our focus right now is truly on unlocking innovation and adoption. We work in the standards space, and so we look to how NIST-fostered standards, best practices, and guidelines documents can help with that innovation and that adoption. The NIST process, the way it normally works, is that we like to gather and collaborate with industry to understand the challenges they’re facing. It’s more of a bottom-up, grassroots approach than a top-down one. We’re not sitting there in Washington saying, you know, this is the problem and we’re going to fix it. We take a little bit of humility and say we don’t actually know what the problem is until we talk to the people who are closest to the issue, because we only have a narrow slice of the world from our vantage point, and the people who are actually in the field working on innovation and adoption have a better sense of what the barriers are.

And so we encourage everyone in industry and across the ecosystem to really engage with us, to tell us the problems that you’re encountering, and we have structured formal ways for you to do that. For instance, the request for information on AI agent security, I think it’s open for about another month, and some have already submitted comments, but we look forward to comments. As I said, we’re also convening listening sessions, I think in April, on barriers to adoption, particularly on agent issues for education, healthcare, and finance. We’re starting with those three sectors, but we really welcome that type of engagement, because we want to facilitate adoption. And one example that I sort of like to use…

I don’t know if it’s actually a barrier to adoption, but let’s say in a regulated field like healthcare or education, there’s PII, and there’s a reluctance to adopt because it’s unclear how the AI agents and systems are treating PII and whether that will satisfy regulatory burdens. CAISI could play a role in helping settle concerns about that, because we could develop benchmarks, methodologies, and evaluation methods to give industry the confidence they need that, for instance, the model they’re looking to procure, adopt, and implement handles PII the way they need it to in order to satisfy their regulatory obligations. So that’s a way where CAISI, through measurement science, best practices, and standards, can help facilitate adoption. We’re also looking at interoperability, and we’ll have more about that in the coming months.
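The kind of PII-handling benchmark Austin describes could look something like the sketch below. Everything here is hypothetical: CAISI has published no such harness, and the pattern list and scoring function are invented purely to show the shape of such an evaluation:

```python
import re

# Toy detectors for two PII categories; a real benchmark would use far
# more robust detection than regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def leaked_pii(model_output: str) -> list[str]:
    """Return which PII categories appear verbatim in a model's output."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(model_output)]

def score_model(outputs: list[str]) -> float:
    """Fraction of outputs with no detectable PII leakage (1.0 is best)."""
    clean = sum(1 for out in outputs if not leaked_pii(out))
    return clean / len(outputs)

outputs = [
    "Your appointment is confirmed.",
    "Patient SSN 123-45-6789 was updated.",  # leaks a social security number
]
print(score_model(outputs))  # 0.5
```

A shared, published score of this kind is what would let a hospital or school district compare candidate models against their regulatory obligations before procurement.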

Jason Oxman

That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry-driven, consensus-based standards, because that’s how the tech industry prefers to operate. It’s better than government regulation, particularly because those standards are global in nature, and NIST is a great example, as you noted, of support for those voluntary, consensus-based industry standards under which we would all prefer to operate. And, Prith, I’ll come back to you on this question of, I guess I’d call them guardrails, the enterprise guardrails around risk management that you’re putting in place. Governments are paying attention; we want to handle these issues in the private sector. What are you seeing that’s important as far as those enterprise guardrails for risk management?

Prith Banerjee

So that’s a great question. Actually, at the AI Summit yesterday, there were a lot of speakers, starting with Prime Minister Modi and President Macron, and everybody talked about responsible, safe AI and AI for everyone. But I want everybody in the audience to understand what is going on in this world, right? So there is a problem: you have a video that you can watch on, say, YouTube or Facebook, and you want to prevent a young child from watching it. That is responsible AI, and you want to make sure that a 12-year-old doesn’t watch it. But if he or she watches it, it’s not the end of the world. The world that we live in, though, is intelligent product design, right?

You are designing a car, and we have, as Syam was mentioning, level 1, which is assistive, all the way to level 5, which is fully autonomous. Now, imagine a world (I’m now doing the scary part so you understand how scary it can be) with an autonomous car driving on the streets of Mumbai. It’s supposed to be autonomous, making sure the pedestrians and the cows are being avoided. But suppose there is a cyber attack, and somebody goes in and wants to use that car as a weapon. As you know, there are terrorists who get into vehicles and ram into things. So we have to make sure these software-defined systems are secure. Just imagine an airplane, right?

You know what has happened in the past: on 9/11, airplanes hit buildings. So you could imagine a software-defined airplane being used as a missile. This is how important it is, because unlike the world of Facebook and Google (and I’m not undermining Facebook or Google, I’m just saying there you are dealing with people watching content and clicking like or unlike), we are dealing with physical AI interacting with the real world. If something goes wrong in the real world, some really dangerous things can happen. So we have to be extra careful. That’s the challenge. What we are trying to do is make sure that, as part of this agentic engineering workflow, we are doing it in a responsible manner, in a safe manner.

And that’s the work we are doing in terms of verification and validation. In the software flow, before we actually do hardware prototyping, we do full, nearly 100% coverage at the digital level. So we are designing the airplane on the computer, designing the car on the computer, with as close to a 100% guarantee as possible. Nothing is 100%. But I want you to understand how much more complicated this is, because we can design software-defined data centers or software-defined nuclear arsenals, and in the hands of the wrong person some bad things can happen. So we have to be extra careful about the responsible, safe AI that we build into our intelligent product design.

It is happening. Software-defined is happening. But we have to be super careful.

Jason Oxman

Thank you. Sometimes the best way to get people to pay attention to what you’re saying is to scare them, and you’ve certainly done that. And Caroline, there’s a lot of bad stuff happening on payment systems as well, and the consequences of fraud, security breaches, or an actual shutdown of the network are almost impossible to contemplate: global commerce grinding to a halt. I don’t know if you want to scare people like that as well when you talk about it.

Caroline Louveaux

Let me go there.

Jason Oxman

Go ahead.

Caroline Louveaux

Speaking of scary stories: coming to New Delhi, I watched Companion, a movie about a romance robot. I’m not going to spoil the end, but that’s actually a scary story for sure. Now, back to Mastercard. The principle is very simple: autonomy can only scale if there’s trust. And so at Mastercard, we think we have a role to play when it comes to agentic commerce, meaning you use an agent to make payments on your behalf. We want these agentic payments to be safe, secure, and trusted. And therefore, we came up with a playbook with four key guardrails. The first one is know your agent. Before an agent acts and before it makes a payment, we want to make sure that it’s verified and trusted.

So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The second one, of course, is security by design; it has to remain the foundation. So we are leveraging advanced technologies around customer authentication and tokenization to make sure that sensitive credentials, for example your card number, are not visible and not exposed to third parties: to the merchants, to the agents, or anyone like that. Third, and this is a bit new, we want to make sure that we have clear consumer intent. The consumer has to always be in control of what he or she authorizes the agent to purchase on his or her behalf. We learned this the practical way just a couple of months ago.

An employee at Mastercard decided to ask an agent, hey, are you able to buy sushi? The idea was just to test the agent’s capability to do so, but the agent took the question literally and placed an order using the employee’s card details on file. So, lesson learned: clarity matters, clarity of intent that can be verified; otherwise you end up with these platters of sushi. And then, last but not least, everything has to be traceable and auditable. That’s needed if you want to be able to give consumers redress if things go wrong, for dispute resolution, and of course to make the regulators happy and comfortable. These guardrails are not there to slow adoption; done well, they’re going to be key to scaling adoption in a way that is trusted by design.
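The four guardrails Caroline lists (know your agent, security by design, clear consumer intent, traceability) compose naturally into an authorization check. The sketch below is purely illustrative, assuming a hypothetical gateway; none of these class or field names are Mastercard’s actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PaymentRequest:
    agent_id: str
    token: str        # tokenized credential, never the raw card number
    amount: float
    description: str

@dataclass
class PaymentGateway:
    registered_agents: set[str]          # guardrail 1: know your agent
    consumer_mandates: dict[str, float]  # guardrail 3: consent, with a spend cap
    audit_log: list[dict] = field(default_factory=list)  # guardrail 4: traceable

    def authorize(self, req: PaymentRequest) -> bool:
        ok = (
            req.agent_id in self.registered_agents            # verified, not a rogue agent
            and req.token.startswith("tok_")                  # guardrail 2: token, no raw PAN
            and self.consumer_mandates.get(req.agent_id, 0.0) >= req.amount  # explicit intent
        )
        self.audit_log.append({                               # every decision is auditable
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": req.agent_id, "amount": req.amount, "approved": ok,
        })
        return ok

gw = PaymentGateway({"shopper-agent"}, {"shopper-agent": 50.0})
print(gw.authorize(PaymentRequest("shopper-agent", "tok_abc", 30.0, "groceries")))       # True
print(gw.authorize(PaymentRequest("shopper-agent", "tok_abc", 500.0, "sushi platters")))  # False
```

The spend-cap mandate is what would have stopped the sushi order: a literal-minded agent can only act inside limits the consumer explicitly granted.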

Jason Oxman

Great. Sushi is not scary, but the use case you described is. So, appreciate that.

Caroline Louveaux

It’s only sushi, we’re good.

Jason Oxman

It’s only sushi, that’s right. Syam, you get to wrap us up, because we’re closing the panel out. You don’t have to scare people if you don’t want to, but I’d love to hear how NetApp is thinking about enterprise guardrails for risk management around agentic AI.

Syam Nair

Yeah, no scary stories. I think one of the ways I would say this is: as humans, we used to make mistakes, but they were much more contained. Sometimes in enterprises you had insider threats, but they were much more contained. Now you’re talking about a network of agents where the blast radius of an error, a mistake, or a threat is much more profound. So guardrails become important, and they need to be at multiple levels. Number one is public-private partnership in identifying the guardrails for how agents need to operate. Being very specific to the enterprise and very specific to the business is important, as is working together with the customers (in some cases consumers, in others business-to-business), understanding the use case, and building guardrails within the system for it.

And more importantly, I think, what one needs to figure out is the governance of the data, because data is what is actually going to power how agents make these decisions, right? Unlike a human, there is no empathy built into the agent, at least not at this point, and it is not making decisions based on situational awareness. It’s making decisions based on the data. And if the data can be manipulated, if the lineage of the data is not properly understood, if it is not really governed, if there are no guardrails for that, then you could actually get outcomes from agents that are going to be scary. The last piece of this is: agents can do many things, but agents cannot take accountability.

They’re not responsible. They can’t take accountability. It’s the humans, the business owners, who take it. So having those guardrails work in tandem with the customer and consumer, and with public-private partnership, is super important in terms of defense.
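Syam’s point that data lineage and classification should gate what an agent is allowed to consume can be sketched as a simple policy check. This is an illustrative sketch only; the record fields and classification tiers are invented, not any NetApp product interface:

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    source: str            # where the data came from
    lineage_verified: bool  # provenance chain checked end to end
    classification: str    # "public", "internal", or "restricted"

def agent_may_consume(ds: DatasetRecord, agent_clearance: str) -> bool:
    """An agent only sees data whose lineage is verified and whose
    classification is within the agent's clearance level."""
    order = ["public", "internal", "restricted"]
    return (
        ds.lineage_verified
        and order.index(ds.classification) <= order.index(agent_clearance)
    )

ds = DatasetRecord("customer-churn", "crm-export",
                   lineage_verified=True, classification="internal")
print(agent_may_consume(ds, "internal"))  # True
print(agent_may_consume(ds, "public"))    # False: classification exceeds clearance
```

Refusing unverified lineage outright is the key move: if the provenance of a dataset cannot be established, the agent never sees it, which shrinks the blast radius of manipulated data.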

Jason Oxman

Thank you. Thank you. So what should policymakers be looking at? Our goal in the tech industry, obviously, is to ensure that public policy is inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market that we all want to see and benefit from. But of course, policymakers have other things in mind. They want to make sure that consumers are protected. They want to make sure that safety and security are part of the design of products deployed into the market. So we have a great industry panel of experts who are going to share their views on what policymakers should be thinking about and what they should be doing to inspire the use of agentic AI while also addressing important public policy concerns.

So I’ll ask each of our panelists to address that and to introduce themselves. Jennifer, I already said who you are; you can just introduce yourself and your company, and let’s take that as the prompt. And you get to pick one thing that you think policymakers should be most focused on.

Jennifer Mulvaney

Great. Thank you, Jason. Jennifer Mulvaney with Adobe. And, you know, I learned a great Hindi term yesterday watching the Prime Minister speak, and that is manav: human. When you really think about policy, policy has been around since the dawn of time, and it really is about helping to prevent harms against humans. That is what policy is still meant to do today. I think when policymakers look at anything, whether it’s tech or welfare or tax policy, it’s: what does this policy mean for humans, and how do we prevent harm? And we as lobbyists in Washington, D.C., in my former role there, go in and talk about what it means for whatever stakeholder group we’re representing.

So we’re now in a world of policy actually governing systems, not just people. But I think the Prime Minister’s focus on humans is something Adobe talks a lot about as well: it should be humans before models. Our CEO at Adobe often says it’s not what we can do with technology, it’s what we should do. And I really love that statement, because it does make you think about what this is going to mean for humans and how we can advance that agenda.

Jason Oxman

Love that. Thank you, Jennifer. Yep. Ellie Sakhaee.

Ellie Sakhaee

Hi, everyone. I’m Ellie Sakhaee. I’m part of the public policy team at Google. Several of our colleagues on the previous panel mentioned that agentic AI is not a single point in development. As we think about agentic AI, we should be thinking about a continuum, depending on an agent’s autonomy, its access to memory, the context of use, and its ability to do long-term planning and act in the real world. That is why, when we think about policy, it’s important to think about this continuum of agents rather than deciding that something is agentic and something is not. That being said, I think one of the main safeguards we talk about is the human in the loop for agentic AI.

And that also varies significantly with the ability, or the reliability, of an agent. As we move from agents that need confirmation and human approval for every single step they want to take toward agents that are more autonomous, we should be thinking about moving from human in the loop to human on the loop, or human in command. A similar analogy is how the Federal Aviation Administration in the U.S. thinks about moving from the pilot always keeping the drone in sight to the pilot being in command of the drone. As the safety of these drones improves, and as the AI systems that keep track of them through detect-and-avoid improve, we can move from the pilot always keeping a visual line of sight with the drone to the pilot being on the loop, or in command.

So I think these analogies from different industries allow us to think about agents. Another thing policymakers should consider as they think about agents is that agents may be a new technology, but at the end of the day they may cause harm. So we should be thinking about regulating the use, the application, or the harm they actualize, rather than regulating the underlying technology. Otherwise we end up regulating, let’s say, the AI models, and by the time the regulation goes into effect, the AI model has evolved into something that is now agentic.
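The continuum Ellie describes, from human in the loop to human on the loop to human in command, amounts to choosing an oversight regime from agent reliability and the stakes of the task. A minimal sketch, where the thresholds and the two-valued stakes parameter are entirely invented for illustration:

```python
def oversight_mode(reliability: float, stakes: str) -> str:
    """Pick an oversight regime from an agent's measured reliability
    (0..1) and the stakes of the task ("low" or "high")."""
    if stakes == "high" or reliability < 0.9:
        return "human-in-the-loop"   # human approves every action
    if reliability < 0.99:
        return "human-on-the-loop"   # human monitors and can intervene
    return "human-in-command"        # human sets goals, audits afterwards

print(oversight_mode(0.95, "high"))   # human-in-the-loop
print(oversight_mode(0.95, "low"))    # human-on-the-loop
print(oversight_mode(0.999, "low"))   # human-in-command
```

The FAA drone analogy maps onto the same shape: as detect-and-avoid reliability rises, the pilot moves from visual line of sight to supervision to command.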

Jason Oxman

Makes sense, and appreciate your perspective. And I should have noted that you’re not only doing public policy work for Google; you’re actually a real computer scientist, a Ph.D. in machine learning. She knows how the machines think, which is important as well. And sometimes they talk to us, right? Sometimes. Let’s go to Carly from Cloudflare next.

Carly Ramsey

Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare, and I’m based in Singapore. Cloudflare, for those of you who don’t know us, runs a global network. We sit in between our customers and their users and protect the traffic that goes back and forth, and a large majority of the AI model providers are our customers as well, so we’re protecting that traffic too. So we have a unique viewpoint. We also offer developer tools, and people are building AI agents on Cloudflare, so there’s that angle as well. So, like you said, choose one thing that we recommend to policymakers.

That’s a hard one, but keeping with the theme of this summit, which is very much about inclusive AI, I think something policymakers should consider is whether or not we’re making agentic AI, specifically, available for everyone. So: is it accessible? Are the standards open? I think open models and open standards are really interesting and are allowing people to access tools they might not normally be able to access. And as policymakers think about diffusing this technology more widely, maybe even outside of the enterprise, one thing that really concerns me as someone who sits in Asia Pacific is: how do we ensure that the different governments, when they’re making these tools accessible, are talking to each other?

And I think ITI has a really neat role to play in that, actually, because we all know that NIST is the gold standard, and these are voluntary standards that are often referenced in Asia. Singapore just came out with its own framework on agentic AI governance, right? And the question is: is that going to be compatible with whatever NIST puts out? Big question. Singapore is a leader in cybersecurity standards in this region. And I’ve had some interesting conversations here these past couple of days about India. India, obviously, with the bastion of tech talent we see there, wants to be involved in standards development, and to speak for the global south.

You know what I mean? So, great. How do we get them involved? And how do we make sure, as global companies, that all of these standards aren’t contradicting each other? So that harmonization piece is very important.

Jason Oxman

So important. Technology doesn’t want to stop at borders. It wants to serve the world, and such an important issue. Sam, Palo Alto? Palo Alto? Perfect. Palo Alto.

Sam Kaplan

You conveniently sat the two cybersecurity companies next to each other. My name is Sam Kaplan. I’m the Assistant General Counsel for Global Policy at Palo Alto Networks, and for those of you who don’t know us, we’re the world’s largest pure-play cybersecurity company. Can you hear me? Yeah. Okay. That’s better; I need to project better. Anyway, Jason, to pivot off of your question, at a high level, the one thing I think we could impart to policymakers is: start with the standards organizations, to tell you the truth. The standards organizations, both in the United States and abroad (Carly referred to the Singapore agency), are in the midst of developing these voluntary frameworks that are really serving as the foundation, not only for understanding the technology but for better understanding the risk picture we are facing with these types of technologies. We started with traditional model security frameworks for LLMs that are all based on prompts and responses.

These standards-setting organizations are now very, very deep into developing those same standards for agentic AI, and they are painting a better picture, working with industry to understand how that risk picture is changing. What was once an almost two-dimensional understanding of the risk of AI models is now very much a three-dimensional picture when you’re looking at agents, because these are models that all of a sudden have arms and legs. So when you’re looking at this from a security perspective, you’re taking what could be a digital threat that can metastasize on networks, and these are now threats that can have kinetic consequences in real life as agents execute decisions across the financial system, as your previous panel discussed, and across autonomous systems.

So understanding that risk picture is going to be critically important. And last, that really pivots into one of the themes of the summit itself: as policymakers look at responsible and safe deployment, they need to understand and appreciate that security, of those models and of those agents, is a foundational layer to increasing trust and to facilitating responsible deployment of AI, because it’s the best way to secure and, as much as we can, understand the behavior of these models and agents as they interact with the ecosystem and now the real, physical world.

Jason Oxman

Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which case they may step in. All right, to follow your theme, we’re moving from cybersecurity to enterprise software. You’re going to take my joke, aren’t you? You sat me next to Combiz. I know, I know. It’s not my joke, it’s Sam’s joke. But, yes, I’m going to take it. So, Danielle, please commence the enterprise software portion of our program. I can speak for you if you want me to. I’m joking.

Danielle Gilliam-Moore

Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy work. The panelists have said a lot of great things, and they’ve also stolen a lot of what I’m going to say, so I’ll try to make this short. When we think about AI, I think there’s a governance response that needs to happen, and when we talk about governance, a lot of people conflate governance with regulation. Governance is more than regulation. Governance can be regulation, but it’s also standards, global norms, and risk and quality assurance procedures in companies. And along with the standards piece, a critical thing to remember is that the ISO controls process takes about three years, so it’s quite a long process.

So when you look at the ISO 42001 standard, it’s a great standard, but it will take time to build further on it, which I think makes organizations like NIST and the different safety institutes incredibly important in filling the gaps while work is being done to bring about new controls around agentic AI. The other thing I’ll say is on regulation. There’s an emerging framework that first started in the UK, but I’m seeing governments like Indonesia take this on: instead of having one large, overarching AI regulation, they’re allowing the different ministries that have core competencies in things like financial services or healthcare to take the lead. So you have a more diffuse model, and I would encourage lawmakers to look at that. Some of these agencies have years and years of relationships and expertise, so wouldn’t they be best placed to think about, not necessarily regulations, but frameworks and rules that best suit, say, a small startup operating a financial services agent or some edge use case? I think that is a more agile way to look at agentic AI, and agility, I think, brings about adoption and is very key to adoption.

Thanks.

Jason Oxman

Perfect. Combiz, is anything left for you to say?

Combiz Abdolrahimi

I was just going to say ditto to everything that Danielle said, because that’s basically what I was going to say, and she said it way better than I could ever do. I guess I would add, having worked in government and now in industry, I like to think I have the vantage point of a former regulator and policymaker as well as of industry. And I think what we are looking for, and what we’ve heard earlier today, is that we want clarity. We want clarity. We want standards. We want to see what good governance looks like.

If I could give a message to governments and regulators: don’t give us theoretical, abstract principles; give us practical standards, what good governance looks like, operational clarity, playbooks, model frameworks. Jason, I remember many years ago, when I was at Treasury and you were at ETA, there was this line: these technologies are rapidly evolving, and as they evolve, policies and regulations need to evolve with them. Otherwise it’s going to stifle these innovations, and it’s going to actually create more harm than good.

Jason Oxman

Well put. Well put. All right, so now that we’ve provided a wish list for regulators, the next question. Danielle, I’m going to give you the chance to go first because of your observation that sometimes panels go down the line and it’s not fair to the people at the end, which I think is absolutely true. I would have let Combiz go first, but you’re speaking for the enterprise software industry generally. So the question is this: one of the big themes here at the AI Impact Summit is unification of the policy agenda across countries, governments, and regions. So is there a particular platform or organization you’ve seen?

Is there a particular place where conversations like the ones we’ve been having here should be taking place? The U.S., India, and like-minded governments around the world want to be on the same page, but there is a tendency toward India-specific standards and U.S.-specific standards. There’s a tendency for that in the physical world and in the digital world, and that’s very difficult for us to operate in. So in the agentic AI arena, I’m curious, from all of you: is there a particular multilateral venue, platform, or thing you’ve seen work well that you would recommend governments here look to for this?

And, Danielle, have I bought you enough time to come up with your answer so that I can call on you first?

Danielle Gilliam-Moore

I woke up this morning knowing the answer to this question. Oh, excellent. Okay. I live for this question. It’s all yours. Which is the OECD. All right. The OECD, I think, is kind of... it’s not where it all started, but there was this really interesting moment where the OECD put out its principles in, was it 2019, I believe? And then it set the floor for everyone else. I mean, the EU AI Act’s definitions are based off of those principles. We’ve seen draft legislation at the state level that’s based off of the OECD AI principles. Globally, when I was doing rounds of meetings in APAC, they were looking at the OECD principles.

So I feel like the world is echoing the OECD in a lot of the regulatory work that’s being done, even if governments don’t always say so or realize they’re looking there. But the OECD has been doing such interesting work. They now have the reporting framework. They’re doing work with GPAI. Their Hiroshima AI Process framework was them taking the work of the G7 and bringing it into what they’re doing. So the OECD is doing so much work to reach out, and I would encourage governments to look at what the OECD is doing and help build on it.

Jason Oxman

That’s great. Sam? You can pick the same one if you want to or …

Sam Kaplan

Well, I’m actually going to layer it, because I think Danielle is exactly right. When you’re looking from a policy and higher-level governance perspective, the OECD has been the leader in this, and there are structures in place through the OECD to develop these. If you look at legislation and regulatory proposals that have come out, even across the various U.S. states, they’ve based definitions off of what the OECD has done, so that has been a foundational piece. From a broader perspective, I think that’s a good layer. The one that has potential, which I would like to see move more tactical rather than being a little bit esoteric and studying, is the International Consortium of Safety Institutes. The structures are there, and you have the right players coming to the table. Those organizations, like what CAISI is doing right now, can advance more tactical standards: creating a taxonomy for agentic AI security, and measuring how the attack surface has changed when it comes to agents.

To understand the scope and scale of this problem, I think there’s a great deal of potential, but you need these two levels to talk policy and standards.

Jason Oxman

Fantastic. Carly?

Carly Ramsey

Just to add something different to the discussion: based in Singapore, what I’ve seen over the years is that Singapore International Cyber Week has drawn more government attendance from around the world every year. So that is a potential venue; it’s an annual event positioned on policy, bringing governments together to discuss cyber policy. Potentially that is a forum that could be considered to make sure that countries from around the world (India, for example, is well represented at Singapore International Cyber Week) all have a voice in the future of agentic AI.

Jason Oxman

That’s great. Love it. Ellie, do you have a preferred platform? Multilateral?

Ellie Sakhaee

Yes, I’m going to add to what my colleagues said here, and that is technical benchmarks. We talk about standards, and we may understand what agents do, but we don’t fully understand what multi-agent systems may do. They may have emergent risks; they may have completely different behaviors that we don’t really know, because we don’t really have real versions of multi-agent systems yet. Some are emerging, but the risk surface will change as these agents interact with each other. So I think the academic community, industry, all of us have a role to play in developing and expanding benchmarks for multi-agent systems, to make sure that before we put them into the world, they are tested.

Jason Oxman

Great. Jennifer, and then Combiz, you’re going to get the last word.

Jennifer Mulvaney

Thank you for sharing. So what I would say is that the OECD definitely comes to mind as the largest, most credible group, and I think that makes sense. But we do have to think about having space for some of the smaller, more regional groups as well. I was speaking in Tokyo a couple of weeks ago at the Friends of the Hiroshima AI Process, which goes back to when Japan hosted the G7 and issued its principles. So I think it’s really important to have those types of smaller, regional, perhaps even policy-area-specific groups that then feed into the bigger consortium in a way that people can understand.

Jason Oxman

That’s great. That’s great. Combiz, close us out.

Combiz Abdolrahimi

Yeah, hopefully. So actually, I was surprised that nobody mentioned the one where I was thinking, please don’t mention it, please don’t mention it, let me do it. So we’re talking about standards. We’re talking about technical benchmarks. We’re talking about principles. We’re talking about coordination at a global scale: private sector, governments, academia, institutions. The ITU, the UN, AI for Good, I mean, they do all of that. And I think that we want to engage with more countries, with more stakeholders in this conversation and make sure that we are being inclusive. And that’s one of the multilateral forums that I would look to.

Jason Oxman

That’s a terrific one. Thanks for adding one to the list at the end of the round. This has been a fantastic discussion. I love the way we paired the business discussion of agentic AI with the policy recommendations, and hopefully policymakers will pay attention to what we’re doing. ITI is proud to represent all of the companies on the panel here today as part of the global tech industry, and particularly proud to be partnered with the Government of India on the AI Impact Summit. Our congratulations to the Prime Minister and to the entire Government of India for this incredible, incredible gathering. Thank you to all of you for being here to be a part of this important discussion, and please join me in thanking our terrific panelists.

Thank you.

Related Resources — Knowledge base sources related to the discussion topics (46)
Factual Notes — Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“CAISI was originally founded as the U.S. AI Safety Institute”

The transcript of Austin Mayron states that CAISI was originally founded as the U.S. AI Safety Institute, confirming the report’s claim [S2].

Correction (medium)

“The centre evolved from the U.S. AI Safety Institute to a standards‑and‑innovation focus in June 2025, signalling a shift from prescriptive safety to enabling innovation”

According to the same transcript, the transition from the U.S. AI Safety Institute to CAISI occurred “last year,” not specifically in June 2025, indicating a discrepancy in the reported timing [S2].

Confirmed (medium)

“The discussion opened at the AI Impact Summit”

The knowledge base records the AI Impact Summit as a 2026 event, confirming that such a summit took place, though it does not specify the organizer [S106].

External Sources (118)
S1
Agentic AI in Focus Opportunities Risks and Governance — -Syam Nair- Chief Product Officer at NetApp (global multi-cloud service provider)
S2
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which …
S3
Agentic AI in Focus Opportunities Risks and Governance — Danielle Gilliam-Moore: Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy wo…
S4
Agentic AI in Focus Opportunities Risks and Governance — -Combiz Abdolrahimi- Role/company not clearly specified, appears to work in industry with former government experience
S5
Agentic AI in Focus Opportunities Risks and Governance — -Ellie Sakhaee- Public policy team member at Google, Ph.D. in computer science/machine learning
S6
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Hi, everyone. I’m Ellie Sakhaee. I am part of public policy team within Google. Several of our colleagues in the previou…
S7
Agentic AI in Focus Opportunities Risks and Governance — -Sam Kaplan- Assistant General Counsel for Global Policy at Palo Alto Networks (cybersecurity company)
S8
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of…
S9
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S10
S11
Agentic AI in Focus Opportunities Risks and Governance — -Jennifer Mulvaney- Public policy role at Adobe
S12
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and t…
S13
Agentic AI in Focus Opportunities Risks and Governance — Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based…
S14
Driving U.S. Innovation in Artificial Intelligence — 7. Jason Oxman – President & CEO, Information Technology Industry Council 8. Julia Stoyanovich – Associate Professor, De…
S15
Agentic AI in Focus Opportunities Risks and Governance — -Jason Oxman- Moderator/Host, appears to be with ITI (Information Technology Industry Council)
S16
Agentic AI in Focus Opportunities Risks and Governance — – Ellie Sakhaee- Caroline Louveaux
S17
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agenti…
S18
Agentic AI in Focus Opportunities Risks and Governance — Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agenti…
S19
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Michael Brown Industry-led, consensus-based approach to standards development is prefe…
S20
WS #257 Emerging Norms for Digital Public Infrastructure — Belli advocates for a bottom-up approach in developing DPI, emphasizing the importance of understanding local contexts a…
S21
WS #283 AI Agents: Ensuring Responsible Deployment — Dominique Lazanski: And actually, well, you bring up the point that I think agentic AI started in the 90s with search an…
S22
Challenging the status quo of AI security — Connection between observed security challenges and need for standards Given the new security challenges that emerge wh…
S23
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S24
Setting the Rules_ Global AI Standards for Growth and Governance — A major theme was the challenge of measurement and benchmarking in AI systems. Rebecca Weiss from ML Commons explained t…
S25
Singapore International Cyber Week (SICW) 2025 — Artificial intelligence is a recurring theme across the agenda, with panels examining AI-enabled cyber operations, accou…
S26
Singapore International Cyber Week — The 5th edition of Singapore International Cyber Week 2020 (SICW) – the region’s most established cybersecurity event – …
S27
Singapore International Cyber Week 2021 — The SICW is one of Asia-Pacific’s most established cybersecurity event since its inception in 2016. SICW brings together…
S28
Cloudflare launches Moltworker platform after AI assistant success — The viral success of Moltbot has prompted Cloudflare tolaunch a dedicated platformfor running the popular AI assistant. …
S29
Closing Session  — Appreciation for working group members for the depth, rigor and practicality of outcomes, stating these are not abstract…
S30
Closing remarks – Charting the path forward — 5. **Coherent Policy Frameworks**: Called for “coherent and interoperable policy frameworks to prevent fragmentation whi…
S31
Agentic AI and the new industrial diplomacy — Xiaomi has been publicly promoting its ‘black-light factory’ concept for smartphones and consumer electronics. This refe…
S32
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems …
S33
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S34
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Very high level of consensus with no significant disagreements identified. The alignment spans government policy makers,…
S35
Semiconductor design set for AI revolution with new Synopsys tool — Synopsys hasintroduced AgentEngineer,an AI-powered technology designed to streamline semiconductor design by automating …
S36
AI being used in payment fraud prevention for e-commerce — Fraugster, a German-Israeli payment security company, has launched afraud prevention solution, Fraud Free Product, using…
S37
WS #19 Satellites, Data, Action: Transforming Tomorrow with Digital — Kulesza Joanna: I think we have a mic and the camera has not yet been enabled, but I’m glad to speak. I also see we ha…
S38
African Union (AU) Data Policy Framework — Cloud servicesare used on-demand at any time, through any access network, using any connected devices that use cloud com…
S39
Data first in the AI era — – **Cybersecurity as Essential to Data Governance**: The panelists stressed that data governance and cybersecurity are i…
S40
Agentic AI in Focus Opportunities Risks and Governance — “If the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if the…
S41
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Dennis Wong:Thank you. Thank you very much. And thanks for having me. As you’ve seen, Singapore has experimented in Sand…
S42
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — High level of consensus with significant implications for policy development. The agreement suggests that the DNS commun…
S43
Data first in the AI era — Steve Macfeely: OK, good afternoon, everybody. I’m glad to see so many people here. So the question, why international d…
S44
The fading of human agency in automated systems — Crucially, a human presence does not guarantee agency if the system is designed around compliance rather than contestati…
S45
Crisis management — This collaboration also helps mitigate the limitations of both approaches. Human oversight ensures accountability, corre…
S46
Agentic AI and the new industrial diplomacy — Several trends are converging:UNandUNESCOframeworks emphasize that AI should augment human capabilities, not replace hum…
S47
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S48
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S49
Green and digital transitions: towards a sustainable future | IGF 2023 WS #147 — In terms of governance, a framework is deemed essential to operationalise long-term systems for the service of citizens….
S50
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Cedric Sabbah:Does anybody else want to say something about this concept of agility? I’m not seeing anyone. So, okay, we…
S51
Promoting policies that make digital trade work for all (OECD) — It recognizes the growth and benefits derived from digital transformation but also highlights challenges stemming from d…
S52
Setting the Rules_ Global AI Standards for Growth and Governance — Implementation requires interoperable and modular standards ecosystems to avoid reinventing approaches for each sector o…
S53
How to make AI governance fit for purpose? — All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may diff…
S54
Seeing, moving, living: AI’s promise for accessible technology — These questions require international coordination and inclusive decision-making. Standard-setting bodies cannot be domi…
S55
Global AI Policy Framework: International Cooperation and Historical Perspectives — The concept includes practical elements such as cloud and data standards that guarantee interoperability and reversibili…
S56
Global Perspectives on Openness and Trust in AI — “I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do…
S57
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Nobuhisa Nishigata: …
S58
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Michael Brown Industry-led, consensus-based approach to standards development is prefe…
S59
Agentic AI in Focus Opportunities Risks and Governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S60
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Austin Marin, Acting Director of the US Center for AI Standards and Innovation, introduced a major new government initia…
S61
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — And this is almost like a test for me of kind of saying. These names of these institutions through this panel. But they …
S62
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems …
S63
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S64
Comprehensive Summary: World Economic Forum Discussion on Stablecoins — Jeremy Allaire describes the broad proliferation of stablecoin use cases across different sectors of the economy. He arg…
S65
Semiconductor design set for AI revolution with new Synopsys tool — Synopsys hasintroduced AgentEngineer,an AI-powered technology designed to streamline semiconductor design by automating …
S66
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agenti…
S67
NVIDIA and Synopsys shape a new era in engineering — The US tech giant, NVIDIA,has deepenedits long-standing partnership with Synopsys through a multi-year strategy designed…
S68
AI being used in payment fraud prevention for e-commerce — Fraugster, a German-Israeli payment security company, has launched afraud prevention solution, Fraud Free Product, using…
S69
AI agents complete first secure transaction with Mastercard and PayOS — PayOS and Mastercard havecompleted the first live agentic paymentusing a Mastercard Agentic Token, marking a pivotal ste…
S70
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Alibaba Cloud Intelligence Group has played a significant role in cloud-based data governance, offering a range of cloud…
S71
GOVERNMENT CLOUD POLICY — – i. Whole-of-government efficiencies: Reducing the cost of developing and maintaining technology and reducing …
S72
Keynote-António Guterres — We need guardrails that preserve human agency, human oversight and human accountability
S73
Agents of Change AI for Government Services & Climate Resilience — “…they can hallucinate it can have bias, it can have toxicity, avoid all of that and they are unpredictable ultimately…
S74
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S75
https://dig.watch/event/india-ai-impact-summit-2026/indias-ai-leap-policy-to-practice-with-aip2 — And we’ve seen with the GDPR framework. For example, that that has had a limiting effect on the African continent. So I …
S76
Open Forum #26 High-level review of AI governance from Inter-governmental P — Audrey Plonk: Does it work now? Okay, now I can hear you. Oh, wonderful. Thank you. I think maybe I was in the observe…
S77
The Role of Government and Innovators in Citizen-Centric AI — The discussion aimed to explore how artificial intelligence, particularly large language models, can transform public se…
S78
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — And with the interfaces that we have today, they can be introduced also in business applications. And I think what ethic…
S79
Main Session on Artificial Intelligence | IGF 2023 — Moderator 1 – Maria Paz Canales Lobel:Thank you very much, Maria, for the opportunity to be here with you today, and I’m…
S80
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S81
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S82
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S83
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S84
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S85
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S86
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S87
Crypto at a Crossroads / DAVOS 2025 — Anthony Scaramucci: Hey, listen, I mean, you know, we have to call balls and strikes. It’s probably not a great Europea…
S88
From India to the Global South_ Advancing Social Impact with AI — The discussion maintained an overwhelmingly optimistic and energetic tone throughout. It began with excitement about you…
S89
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S90
From summer disillusionment to autumn clarity: Ten lessons for AI — As we refocus on existing risks, some accountability is due:how and why did respected voices get carried away with AGI p…
S91
How can we deal with AI risks? — Long-term risksare the scary sci-fi stuff – the unknown unknowns. These are the existential threats, the extinction risk…
S92
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S93
Artificial intelligence — AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the …
S94
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S95
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S96
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S97
Panel 1 – Accelerating Cable Repairs: Reducing Delays Through Smarter Processes  — The tone was collaborative and constructive throughout, with panelists building on each other’s points and sharing pract…
S98
How AI Is Transforming Diplomacy and Conflict Management — The discussion maintained a consistently thoughtful and cautiously optimistic tone throughout. Participants demonstrated…
S99
WS #395 Applying International Law Principles in the Digital Space — The discussion maintained a serious, academic tone throughout, with participants demonstrating deep expertise and concer…
S100
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S101
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S102
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S103
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S104
Opening and introduction — The AU’s commitment to working with Member States in adopting the meeting’s recommendations was reaffirmed, alongside th…
S105
Agentic AI transforms enterprise workflows in 2026 — Enterprise AIentereda new phase as organisations transitioned from simple, prompt-driven tools to autonomous agents capa…
S106
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — A haves and have -nots framing, however, risks distracting from what should be the main point of international AI dialog…
S107
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S108
Research Publication No. 2014-6 March 17, 2014 — A prime example of the important role that a government unit can play in cloud standard setting initiatives is the Natio…
S109
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — So I would say the Manav mission, it’s welfare, human -centric, and all those aspects are there. And from the governance…
S110
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — On the one hand, we want them to be able to apply across borders because we want to enable companies to have responsible…
S111
INTRODUCTION — To effectively pursue the objectives defined in the strategy, it will be essential to define an entity responsible for t…
S112
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Yes, thank you. So super excited. This week we announced in partnership with the Office of Principal Scientific Advisory…
S113
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S114
Hitler’s impact: Catalysing Europe’s fall and USA’s rise to power — WWI brought about a professionalisation of the states’ bureaucracy in the Allied states and a belated realisation that r…
S115
UNITED NATIONS CONFERENCE ON TRADE AND DEVELOPMENT — As UNCTAD (2019a) warned, firms in many developing countries may find themselves in subordinate positions, with data …
S116
EXCERPTED FROM — 8. The war on terror has been justified by what has been coined the Bush Doctrine of preemption, unilateralism, and mili…
S117
DIGITAL DIVIDENDS — The internet emerged from U.S. government research in the 1970s, but as it grew into a global network of netw…
S118
AI safety institute launches £8.5 million initiative to enhance systemic safety research — The AI Safety Instituteis launchingan £8.5 million funding scheme to support research on AI system safety, while the ini…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Austin Mayron
4 arguments · 194 words per minute · 951 words · 293 seconds
Argument 1
Industry‑front‑door & standards focus – Austin Mayron explains that CAISI serves as the “front door” for industry to the U.S. government, partnering with NIST to develop voluntary, consensus‑based standards that unlock adoption.
EXPLANATION
Austin describes CAISI’s role within the Department of Commerce as the primary gateway for industry to engage with the U.S. government. He emphasizes the partnership with NIST to create voluntary standards that facilitate innovation and adoption of agentic AI.
EVIDENCE
He states that the Secretary has tasked CAISI to be the front door for industry to the United States government and that they collaborate with NIST, an organization that historically promotes economic growth through standards rather than regulation, to develop the standards and best practices needed for industry to flourish [19-22][26-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
CAISI’s role as the front door and its partnership with NIST to develop voluntary standards is described in S19, which announces the Agent Standards Initiative led by CAISI, and reinforced in S1 which highlights CAISI’s front‑door function and collaboration with NIST.
MAJOR DISCUSSION POINT
Front‑door industry engagement and standards development
AGREED WITH
Jason Oxman, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Argument 2
Security RFI and sector‑specific listening sessions – Austin Mayron notes CAISI’s request for information on AI‑agent security and upcoming sector‑focused sessions to identify adoption barriers and develop benchmarks.
EXPLANATION
Austin outlines CAISI’s recent actions to gather industry input on AI agent security through an RFI and to hold listening sessions for health care, education, and finance. These efforts aim to surface challenges and create benchmarks that support safe adoption.
EVIDENCE
He mentions that CAISI issued a request for information on AI agent security and that they are convening sector-specific listening sessions in April on barriers to adoption for health care, education, and finance, inviting industry to share challenges [34-38][164-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The issuance of a request for information and the organization of sector‑specific listening sessions are documented in S19 and S1, both of which mention CAISI’s sector‑focused listening sessions to surface adoption challenges.
MAJOR DISCUSSION POINT
Collecting industry feedback on security and adoption barriers
Argument 3
CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems, rather than imposing top‑down solutions.
EXPLANATION
Mayron explains that CAISI prefers to listen to those closest to the technology challenges, acknowledging its limited perspective and emphasizing collaboration with industry to identify barriers.
EVIDENCE
He states that CAISI takes “a little bit of humility and say, we don’t actually know what the problem is until we talk to the people who are closest to the issue” and that the process is more bottom-up than top-down [158-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A bottom‑up, stakeholder‑driven methodology is emphasized in S20’s discussion of DPI development and echoed in S1’s call for a grassroots, industry‑driven approach.
MAJOR DISCUSSION POINT
Collaborative, bottom‑up standards development
Argument 4
CAISI will develop benchmarks, methodologies, and evaluation methods to assure that AI agents handle personally identifiable information (PII) correctly in regulated sectors such as healthcare and education.
EXPLANATION
Mayron points out that uncertainty around how agents process PII hampers adoption, and CAISI can provide measurable standards to give companies confidence that privacy obligations are met.
EVIDENCE
He gives the example of a regulated field like healthcare where there is reluctance to adopt because it is unclear how agents treat PII, and suggests CAISI could develop benchmarks and evaluation methods to settle those concerns [168-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concern about PII in regulated fields and CAISI’s potential role in providing benchmarks is highlighted in S1.
MAJOR DISCUSSION POINT
Creating PII‑focused benchmarks for regulated sectors
Sam Kaplan
1 argument · 173 words per minute · 675 words · 233 seconds
Argument 1
Standards as security foundation – Sam Kaplan argues that standards organizations are the essential foundation for understanding and mitigating the three‑dimensional risk picture of agentic AI, especially security.
EXPLANATION
Sam stresses that voluntary standards bodies are crucial for mapping the evolving risk landscape of agentic AI, turning a two‑dimensional model risk view into a three‑dimensional one that includes kinetic consequences. He positions standards as the base for security and trust.
EVIDENCE
He explains that standards organizations are developing frameworks that capture the three-dimensional risk picture of agentic AI, moving from traditional model security to agentic risks that can have kinetic real-world impacts, and that understanding this risk picture is critically important for security [332-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S19 notes that standards bodies are developing frameworks that capture the three‑dimensional risk picture of agentic AI, and S22 stresses the need for proper standards to address emerging security challenges.
MAJOR DISCUSSION POINT
Standards as the foundation for security risk assessment
AGREED WITH
Jason Oxman, Austin Mayron, Carly Ramsey, Combiz Abdolrahimi
Carly Ramsey
3 arguments · 188 words per minute · 547 words · 173 seconds
Argument 1
Open, inclusive standards – Carly Ramsey stresses the need for open models and open standards to make agentic AI accessible worldwide and to harmonize regional frameworks (e.g., Singapore vs. NIST).
EXPLANATION
Carly calls for open AI models and standards that enable global access, emphasizing the importance of aligning regional frameworks such as Singapore’s with NIST’s standards. She highlights the role of policy in ensuring inclusivity and interoperability.
EVIDENCE
She notes that policymakers should consider whether agentic AI is accessible to everyone, that open models and standards facilitate broader access, and raises the question of compatibility between Singapore’s framework and NIST’s standards, pointing out Singapore’s leadership in cybersecurity standards [304-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 references Singapore’s own agentic AI governance framework and the importance of open models, while S19 describes NIST as the “gold standard” and notes the relevance of Singapore’s framework for global alignment.
MAJOR DISCUSSION POINT
Ensuring openness and global harmonization of AI standards
AGREED WITH
Jason Oxman, Austin Mayron, Sam Kaplan, Combiz Abdolrahimi
Argument 2
Regional forums for harmonization – Carly Ramsey points to Singapore International Cyber Week as a venue where governments converge to discuss cyber and AI policy, fostering cross‑regional dialogue.
EXPLANATION
Carly highlights the annual Singapore International Cyber Week as a platform that brings together governments worldwide to discuss cyber and AI policy, suggesting it as a useful venue for multilateral coordination on agentic AI governance.
EVIDENCE
She describes how Singapore International Cyber Week has grown in attendance from governments globally, providing a space for policy discussions on cyber and AI, and mentions its role in bringing together diverse countries such as India [404-406].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of Singapore International Cyber Week as a platform for global cyber and AI policy dialogue is detailed in S25, S26, and S27.
MAJOR DISCUSSION POINT
Using regional cyber weeks for international policy coordination
Argument 3
Cloudflare acts as critical infrastructure protection for AI model providers by securing traffic and offering developer tools that enable the creation of AI agents.
EXPLANATION
Ramsey describes Cloudflare’s role in sitting between customers and users, protecting the data flows of AI model providers, and providing tools that developers use to build AI agents, positioning the company as a key defender of agentic AI deployments.
EVIDENCE
She states that Cloudflare “runs a global network, … we protect the traffic that goes back and forth” and that many AI model providers are their customers, while also offering developer tools for building AI agents [298-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Carly’s description of Cloudflare protecting traffic between customers and users and providing developer tools is documented in S1.
MAJOR DISCUSSION POINT
Infrastructure security for AI agents
Combiz Abdolrahimi
3 arguments, 165 words per minute, 334 words, 120 seconds
Argument 1
Practical, actionable guidance – Combiz Abdolrahimi calls for concrete, operational standards and playbooks rather than abstract principles, emphasizing clarity for industry and regulators.
EXPLANATION
Combiz argues that regulators need clear, practical guidance—such as standards, playbooks, and operational frameworks—rather than high‑level theoretical principles. He stresses that actionable clarity will help both industry and policymakers.
EVIDENCE
He states that industry wants clarity, standards, and concrete governance playbooks, and warns against abstract principles, calling for practical standards, operational clarity, and model frameworks [364-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for concrete, actionable standards instead of abstract principles are echoed in S1, S29, and S30, which stress the need for practical outcomes and shared responsibility.
MAJOR DISCUSSION POINT
Demand for concrete, operational standards
AGREED WITH
Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey
Argument 2
Clear, operational standards over abstract principles – Combiz Abdolrahimi calls for concrete governance playbooks, benchmarks, and operational clarity to guide industry and regulators.
EXPLANATION
Repeating his earlier point, Combiz emphasizes the need for specific, actionable standards and benchmarks rather than vague policy language, urging regulators to provide operational guidance that can be directly applied.
EVIDENCE
He repeats the call for clarity, standards, and operational guidance, noting that governments should avoid abstract principles and instead deliver practical standards and playbooks [364-368].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demand for operational clarity and concrete governance tools is reinforced in S1, S29, and S30.
MAJOR DISCUSSION POINT
Advocating for operational clarity in governance
Argument 3
Broader multilateral engagement – Combiz Abdolrahimi adds that bodies such as the ITU, UN, and AI‑for‑Good initiatives should be leveraged to ensure inclusive, global participation in standards development.
EXPLANATION
Combiz suggests expanding multilateral involvement by engaging organizations like the ITU, UN, and AI‑for‑Good to foster inclusive global dialogue on AI standards and governance, ensuring diverse stakeholder input.
EVIDENCE
He lists the ITU, UN, and AI-for-Good as examples of multilateral forums that can be used to engage more countries and stakeholders, emphasizing inclusivity in global standards work [432-434].
MAJOR DISCUSSION POINT
Expanding multilateral platforms for inclusive standards
Danielle Gilliam-Moore
3 arguments, 189 words per minute, 635 words, 201 seconds
Argument 1
Agile, sector‑specific governance – Danielle Gilliam‑Moore highlights that governance can be more agile than formal regulation, using sector‑specific ministries and interim safety institutes to fill gaps while longer‑term ISO standards are developed.
EXPLANATION
Danielle explains that governance need not wait for formal regulation; sector ministries can create rapid, tailored frameworks, and safety institutes can bridge gaps during the lengthy ISO standard development process.
EVIDENCE
She contrasts governance with regulation, noting that governance includes standards, global norms, and risk procedures, and points out that ISO standards take about three years, so interim safety institutes are crucial for filling gaps while longer-term standards are built [353-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for agile, sector‑specific governance frameworks is discussed in S30, which calls for coherent, interoperable policy frameworks and agile governance to bridge gaps before ISO standards mature.
MAJOR DISCUSSION POINT
Using sector ministries and safety institutes for agile governance
Argument 2
Agile, ministry‑driven frameworks – Danielle Gilliam‑Moore suggests leveraging existing sector ministries for tailored, rapid frameworks, allowing startups and niche use‑cases to comply without waiting for global standards.
EXPLANATION
She recommends that governments let specialized ministries (e.g., health, finance) lead on AI governance, providing faster, context‑specific rules that support innovation, especially for smaller firms and emerging use‑cases.
EVIDENCE
She cites emerging frameworks that started in the UK and are now seen in Indonesia, where ministries with core competencies drive AI governance, offering a more agile approach than centralized regulation [354-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S30’s emphasis on agile governance and sector‑driven policy mechanisms supports this argument.
MAJOR DISCUSSION POINT
Sector‑specific ministries as agile policy drivers
Argument 3
OECD as the global anchor – Danielle Gilliam‑Moore identifies the OECD’s AI principles and reporting framework as the primary reference point for worldwide policy alignment.
EXPLANATION
Danielle points to the OECD’s AI principles, reporting framework, and related work as the foundational reference that many jurisdictions, including the EU AI Act and U.S. state drafts, already rely on for AI policy alignment.
EVIDENCE
She notes that the OECD set the floor for global AI policy, that the EU AI Act’s definitions are based on OECD principles, and that much state-level legislation references them, highlighting the OECD’s reporting framework and GPI work as key global references [386-393].
MAJOR DISCUSSION POINT
OECD as the central reference for AI policy
Prith Banerjee
3 arguments, 171 words per minute, 1262 words, 442 seconds
Argument 1
Agentic engineers augment chip design – Prith Banerjee describes “agentic engineers” that complement human designers, enabling faster, more complex silicon and system development for automotive, aerospace, and data‑center products.
EXPLANATION
Prith explains that Synopsys is creating AI‑driven “agentic engineers” that handle lower‑level reasoning tasks, working alongside human engineers to accelerate chip and system design for high‑complexity products such as cars and aircraft.
EVIDENCE
He states that agentic engineers will complement human engineers, adding roughly 200,000 AI-driven engineers to support human teams, while humans remain in the loop to prevent drastic errors [90-93].
MAJOR DISCUSSION POINT
AI‑augmented engineering workforce
Argument 2
Verification, validation, and safety for physical AI – Prith Banerjee warns that agentic AI controlling physical systems (cars, aircraft, nuclear assets) demands exhaustive digital‑level verification to prevent catastrophic misuse.
EXPLANATION
Prith highlights the heightened risk when agentic AI operates physical systems, citing scenarios like autonomous cars in Mumbai or weaponized aircraft, and stresses the need for near‑100 % digital verification before hardware prototyping.
EVIDENCE
He describes potential threats such as cyber-attacks on autonomous cars and aircraft, the need for extensive verification and validation to achieve close to 100 % coverage at the digital level, and the broader danger of software-defined critical infrastructure falling into the wrong hands [188-207].
MAJOR DISCUSSION POINT
Safety through exhaustive verification for physical AI
Argument 3
Agentic AI is a strategic priority for Synopsys, underpinning its transition from a pure EDA tool provider to a “chips‑to‑systems” company and driving future growth.
EXPLANATION
Banerjee explains that Synopsys has acquired Ansys to become a chips‑to‑systems firm and that agentic AI is at the core of this strategic direction, enabling the company to expand its market reach.
EVIDENCE
He notes that “agentic AI is actually the core of this” and that Synopsys recently acquired Ansys for $35 billion to become a chips-to-systems company, reflecting the strategic importance of agentic AI [56-62].
MAJOR DISCUSSION POINT
Strategic business importance of agentic AI
Caroline Louveaux
3 arguments, 163 words per minute, 678 words, 249 seconds
Argument 1
Operational AI for fraud detection and payment flow – Caroline Louveaux outlines how MasterCard deploys agentic AI that not only recommends but actually executes fraud‑prevention actions in milliseconds, requiring clear permissions and human oversight.
EXPLANATION
Caroline notes that MasterCard has moved from assistive AI to agentic AI that can autonomously detect suspicious transactions, triage fraud signals, and initiate secure payment flows, all while operating within defined permissions and maintaining human oversight.
EVIDENCE
She explains that AI is shifting from recommendation to action, with agents deployed to detect suspicious transactions, triage fraud, and initiate secure flows in milliseconds, and stresses that agents must act within defined values, permissions, and with end-to-end human oversight [105-115].
MAJOR DISCUSSION POINT
Agentic AI enabling real‑time fraud mitigation
Argument 2
Four guardrails for trustworthy payments – Caroline Louveaux proposes a playbook: (1) “Know Your Agent,” (2) security‑by‑design, (3) explicit consumer intent, and (4) traceability/auditability.
EXPLANATION
Caroline presents a four‑point framework to ensure safe agentic payments: verifying agent identity, embedding security by design, confirming clear consumer intent, and ensuring all actions are traceable and auditable for redress and regulator confidence.
EVIDENCE
She lists the four guardrails: know your agent, security by design, clear consumer intent (illustrated by the sushi-ordering incident), and traceability/auditability, explaining each and noting their role in building trust while scaling adoption [218-226][227-231].
MAJOR DISCUSSION POINT
Guardrails to secure agentic payment transactions
Argument 3
Effective deployment of agentic AI requires agents to operate within clearly defined values, permissions, and human‑in‑the‑loop oversight to prevent open‑ended autonomy and ensure accountability.
EXPLANATION
Louveaux stresses that agents must have explicit boundaries on what they are allowed to do, and that humans must retain end‑to‑end oversight to guarantee responsible behavior.
EVIDENCE
She outlines that agents must act “within clear values, principles, within clear permissions” and that humans need full end-to-end oversight, emphasizing the need to avoid open-ended autonomy [110-115].
MAJOR DISCUSSION POINT
Defining permissions and human oversight for safe agentic AI
Syam Nair
3 arguments, 183 words per minute, 645 words, 210 seconds
Argument 1
Data‑centric agents improve storage and risk detection – Syam Nair details agents embedded near storage controllers that enhance data quality, enable on‑premise AI processing, and surface security threats directly at the data layer.
EXPLANATION
Syam describes how NetApp is developing AI agents that sit close to storage controllers, allowing data to be prepared and processed without moving it, thereby improving data quality and enabling real‑time threat detection at the storage layer.
EVIDENCE
He explains that agents positioned near the storage controller can prepare structured data at the source, improve AI readiness, and help detect security threats such as rapid ransomware breakouts directly where the data resides [135-139].
MAJOR DISCUSSION POINT
Embedding agents for data preparation and security
Argument 2
Multi‑level guardrails & data governance – Syam Nair emphasizes layered safeguards, public‑private partnership on guardrails, rigorous data governance, and the principle that ultimate accountability rests with humans.
EXPLANATION
Syam argues that because agents can amplify errors, guardrails must be layered across the enterprise, involve public‑private collaboration, enforce strict data governance, and ensure that humans retain final accountability for agent actions.
EVIDENCE
He outlines the need for multi-level guardrails, public-private partnership to define enterprise-specific rules, and strong data governance to prevent manipulation, and stresses that agents cannot take accountability; the business owner must [239-248].
MAJOR DISCUSSION POINT
Layered enterprise guardrails and data governance
Argument 3
Agentic AI maturity can be categorized into levels, with NetApp’s current work situated around level three, indicating a progression from assistive co‑pilot functions toward more autonomous multi‑agent networks.
EXPLANATION
Nair describes a five‑level framework for agentic AI, noting that NetApp is currently in the early‑mid stage (around level three), which informs expectations for future capabilities and guardrails.
EVIDENCE
He explains that “if you have five levels of AI … we’re still in that journey somewhere in the three range” indicating NetApp’s position on the maturity scale [141-142].
MAJOR DISCUSSION POINT
Agentic AI maturity levels
Jennifer Mulvaney
2 arguments, 223 words per minute, 333 words, 89 seconds
Argument 1
Human‑first harm prevention – Jennifer Mulvaney urges policymakers to evaluate every AI initiative through the lens of protecting humans and preventing harm.
EXPLANATION
Jennifer stresses that policy should always prioritize human welfare, asking what the policy means for people and how it can prevent harms, positioning humans before models in decision‑making.
EVIDENCE
She notes that policy has always been about protecting humans, that policymakers should ask what the policy means for humans and how to prevent harm, and cites Adobe’s stance that technology should serve what we should do, not just what we can do [263-267][268-272].
MAJOR DISCUSSION POINT
Human‑centric approach to AI policy
Argument 2
Policy should focus on “what we should do” rather than merely “what we can do,” placing human welfare at the centre of AI governance.
EXPLANATION
Mulvaney argues that the purpose of policy is to protect humans, and that decision‑makers must evaluate AI initiatives based on societal benefit rather than technical capability alone.
EVIDENCE
She says “it’s not what we can do with technology, it’s what we should do” and that policy should always ask what it means for humans and how to prevent harm [270-271].
MAJOR DISCUSSION POINT
Human‑first ethic in AI policy
Ellie Sakhaee
3 arguments, 146 words per minute, 505 words, 206 seconds
Argument 1
Continuum of autonomy & human‑in‑the‑loop – Ellie Sakhaee recommends regulations reflect the spectrum of agent autonomy, shifting from “human‑in‑the‑loop” to “human‑on‑the‑loop” as agents become more reliable.
EXPLANATION
Ellie proposes that policy should recognize a continuum of agent autonomy and adjust human oversight accordingly, moving from constant human confirmation to supervisory roles as agents mature, using aviation analogies.
EVIDENCE
She describes the continuum based on autonomy, memory, context, and planning, and argues that oversight should evolve from human-in-the-loop to human-on-the-loop or human-in-command, citing the FAA’s shift in drone oversight as an analogy [278-284].
MAJOR DISCUSSION POINT
Adapting oversight to agent autonomy levels
Argument 2
Regulate applications, not just underlying models – Ellie also advises focusing on the harms caused by specific agentic uses rather than trying to freeze the underlying technology.
EXPLANATION
Ellie suggests that regulators should target the applications and potential harms of agentic AI rather than attempting to regulate the underlying models, which evolve rapidly and could render regulations obsolete.
EVIDENCE
She argues that policymakers should regulate the use or application that causes harm, not the underlying AI models, to avoid regulating technology that may have already advanced beyond the regulation by the time it is enforced [287-289].
MAJOR DISCUSSION POINT
Application‑focused regulation over model‑centric rules
Argument 3
Technical benchmarks for multi‑agent systems – Ellie Sakhaee stresses the need for academic‑industry collaboration to create benchmarks that evaluate emerging multi‑agent behaviors before deployment.
EXPLANATION
Ellie calls for the development of technical benchmarks to assess the risks and behaviors of multi‑agent systems, emphasizing collaboration between academia and industry to ensure safety prior to real‑world use.
EVIDENCE
She notes that while standards exist for single agents, multi-agent systems present new risks and behaviors that need benchmarks, urging the academic and industry community to develop and expand such benchmarks [410-415].
MAJOR DISCUSSION POINT
Developing benchmarks for multi‑agent risk assessment
Jason Oxman
5 arguments, 153 words per minute, 2123 words, 831 seconds
Argument 1
Agentic AI creates new opportunities across many industries and therefore requires targeted public‑policy solutions to encourage its responsible use.
EXPLANATION
Oxman points out that agentic AI is already generating jobs and societal benefits in sectors such as automotive, aerospace, and finance, and stresses that governments need to develop policies that both promote adoption and address emerging risks.
EVIDENCE
He notes that agentic AI is “the AI of agents” and that there has been extensive discussion about its potential for jobs and societal benefits, followed by the question of what public-policy solutions are needed to encourage its use [4-6].
MAJOR DISCUSSION POINT
Business opportunities and need for public‑policy support
Argument 2
Voluntary, industry‑driven consensus standards are preferable to top‑down regulation for governing agentic AI.
EXPLANATION
Oxman argues that the tech industry operates best when standards are voluntary and globally applicable, allowing faster innovation than prescriptive government rules.
EVIDENCE
He praises the focus on voluntary, consensus-based standards and contrasts them with government regulation, stating that such standards are global in nature and better suited to the industry [172-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S19 and S1 advocate for industry‑led, consensus‑based standards over prescriptive government regulation.
MAJOR DISCUSSION POINT
Preference for voluntary standards over regulation
AGREED WITH
Austin Mayron, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Argument 3
Policymakers should craft inspirational, non‑interfering policies that protect consumers and ensure safety while allowing rapid market deployment of agentic AI.
EXPLANATION
Oxman emphasizes that public policy should inspire innovators rather than hinder them, but must still safeguard consumers and embed safety and security into product design.
EVIDENCE
He says the goal is for policy to be “inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market” while also protecting consumers and ensuring safety and security [249-255].
MAJOR DISCUSSION POINT
Balancing innovation encouragement with consumer protection
Argument 4
Industry has a responsibility to inform governments about risks and emerging guardrails, and clear guidance helps regulators understand the challenges faced by companies.
EXPLANATION
Oxman calls on companies to proactively share information on risk mitigation and guardrails so that the U.S. administration can develop appropriate policies.
EVIDENCE
He tees up a question for Austin about what the industry should flag for the U.S. administration regarding guardrails, and requests practical tips on what information is helpful to governments [144-152].
MAJOR DISCUSSION POINT
Industry‑government communication on risk management
Argument 5
The shift from assistive AI to operational AI demands explicit oversight and guardrails to maintain accountability and human control.
EXPLANATION
Oxman highlights that when AI agents move from recommending actions to actually executing them, robust oversight mechanisms are essential to prevent unintended consequences.
EVIDENCE
He remarks that moving from assistive AI to operational AI means agents can take tasks on, but oversight must remain in the system, and that guidelines and protections will be discussed later [121-124].
MAJOR DISCUSSION POINT
Need for oversight as AI agents become operational
AGREED WITH
Caroline Louveaux, Prith Banerjee, Ellie Sakhaee, Syam Nair
Agreements
Agreement Points
Voluntary, industry‑driven consensus standards are preferred over top‑down regulation for governing agentic AI.
Speakers: Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Voluntary, industry‑driven consensus standards are preferable to top‑down regulation for governing agentic AI. Industry‑front‑door & standards focus – Austin Mayron explains that CAISI serves as the “front door” for industry to the U.S. government, partnering with NIST to develop voluntary, consensus‑based standards that unlock adoption. Standards as security foundation – Sam Kaplan argues that standards organizations are the essential foundation for understanding and mitigating the three‑dimensional risk picture of agentic AI, especially security. Open, inclusive standards – Carly Ramsey stresses the need for open models and open standards to make agentic AI accessible worldwide and to harmonize regional frameworks (e.g., Singapore vs. NIST). Practical, actionable guidance – Combiz Abdolrahimi calls for concrete, operational standards and playbooks rather than abstract principles, emphasizing clarity for industry and regulators.
All five speakers converge on the view that voluntary, consensus-based standards, developed collaboratively with industry, are the preferred mechanism for governing agentic AI, rather than prescriptive government regulation [172-174][19-22][26-29][332-339][304-307][364-368].
POLICY CONTEXT (KNOWLEDGE BASE)
This preference mirrors the industry-led, consensus-based approach advocated by U.S. standards bodies, which argue that voluntary standards are more effective than government mandates [S58] and reflects broader calls for bottom-up governance in global tech policy discussions [S50].
A bottom‑up, industry‑driven approach is essential for developing standards and informing public policy on agentic AI.
Speakers: Austin Mayron, Jason Oxman
CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems. Industry has a responsibility to inform governments about risks and emerging guardrails, and clear guidance helps regulators understand challenges.
Both Austin and Jason stress that standards and policy should be shaped by direct industry input, emphasizing humility and collaboration rather than top-down mandates [158-162][144-152].
POLICY CONTEXT (KNOWLEDGE BASE)
A bottom-up model was highlighted as a way to future-proof global tech governance and ensure agility at the IGF Open Forum #44, emphasizing industry participation in AI standard-setting [S50] and reinforcing the industry-led consensus stance [S58].
Security, risk assessment, and layered guardrails are critical for safe deployment of agentic AI.
Speakers: Austin Mayron, Sam Kaplan, Caroline Louveaux, Syam Nair, Prith Banerjee, Ellie Sakhaee
Security RFI and sector‑specific listening sessions – Austin notes CAISI’s request for information on AI‑agent security and upcoming sector‑focused sessions to identify adoption barriers. Standards as security foundation – Sam Kaplan argues that standards organizations provide the foundational layer for understanding and mitigating the three‑dimensional risk picture of agentic AI. Four guardrails for trustworthy payments – Caroline outlines a playbook (know your agent, security‑by‑design, clear consumer intent, traceability/auditable) to ensure safe agentic payments. Multi‑level guardrails & data governance – Syam emphasizes layered safeguards, public‑private partnership, rigorous data governance, and human accountability. Verification, validation, and safety for physical AI – Prith warns that agentic AI controlling physical systems requires near‑100 % digital verification to prevent catastrophic misuse. Technical benchmarks for multi‑agent systems – Ellie stresses the need for academic‑industry collaboration to develop benchmarks that evaluate emerging multi‑agent behaviors before deployment.
All six speakers highlight that robust security measures, risk-focused standards, and multi-layered guardrails (including data governance and verification) are indispensable for trustworthy agentic AI across sectors [34-38][164-166][332-339][218-226][227-231][239-248][188-207][410-415].
POLICY CONTEXT (KNOWLEDGE BASE)
Guardrails and risk assessment were emphasized as essential safeguards for agentic AI, noting that missing data lineage and lack of guardrails can produce dangerous outcomes [S40] and that layered safeguards are a core component of responsible AI frameworks [S45].
Human oversight (human‑in‑the‑loop or human‑on‑the‑loop) is essential to ensure accountability and prevent open‑ended autonomy of agentic AI.
Speakers: Caroline Louveaux, Prith Banerjee, Ellie Sakhaee, Jason Oxman, Syam Nair
Effective deployment of agentic AI requires agents to operate within clearly defined values, permissions, and human‑in‑the‑loop oversight to prevent open‑ended autonomy. Agentic engineers complement human engineers; the human remains in the loop to prevent drastic errors. Continuum of autonomy & human‑in‑the‑loop – regulations should reflect a spectrum of agent autonomy, shifting from human‑in‑the‑loop to human‑on‑the‑loop as agents mature. The shift from assistive AI to operational AI demands explicit oversight and guardrails to maintain accountability and human control. Agents cannot take accountability; ultimate responsibility rests with humans, reinforcing the need for layered guardrails and oversight.
Caroline, Prith, Ellie, Jason, and Syam all agree that agents must operate under clear permissions with continuous human oversight, and that accountability ultimately lies with humans, to avoid uncontrolled autonomous actions [105-115][90-93][278-284][121-124][245-248].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources stress that human oversight is needed to maintain accountability and counteract algorithmic blind spots, warning that mere human presence does not guarantee agency if systems are compliance-driven [S44][S45][S46].
The OECD’s AI principles and reporting framework serve as the primary global reference for aligning AI policy across jurisdictions.
Speakers: Danielle Gilliam‑Moore, Sam Kaplan, Jennifer Mulvaney
OECD as the global anchor – Danielle identifies the OECD’s AI principles and reporting framework as the foundational reference for worldwide policy alignment. OECD has been the leader … foundational piece – Sam notes that many U.S. state definitions and international policies are based on OECD principles. OECD … most credible group – Jennifer states that the OECD is the largest and most credible group for AI policy coordination.
Danielle, Sam, and Jennifer all point to the OECD as the central, credible platform that underpins AI policy harmonisation globally, influencing the EU AI Act, U.S. state drafts, and other national frameworks [386-393][401-403][418-420].
POLICY CONTEXT (KNOWLEDGE BASE)
The OECD AI Principles are repeatedly cited as a foundational international framework for AI governance, underpinning calls for standardized global policies and interoperable standards [S51][S53].
Open, inclusive standards and multilateral coordination are needed to ensure global accessibility and harmonisation of agentic AI.
Speakers: Carly Ramsey, Jennifer Mulvaney, Sam Kaplan, Combiz Abdolrahimi
Open, inclusive standards – Carly stresses the need for open models and open standards to make agentic AI accessible worldwide and to harmonise regional frameworks. Need space for smaller regional groups – Jennifer highlights the importance of regional initiatives complementing global standards like the OECD. International Consortium of Safety Institutes – Sam suggests a tactical, multilateral forum to develop technical standards and taxonomies for agentic AI security. Broader multilateral engagement – Combiz calls for leveraging bodies such as the ITU, UN, and AI‑for‑Good to ensure inclusive, global participation in standards development.
Carly, Jennifer, Sam, and Combiz converge on the necessity of open, inclusive standards and multilateral platforms (regional groups, safety institutes, UN bodies) to promote worldwide access and harmonisation of agentic AI [304-307][420-423][401-402][432-434].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on the need for inclusive, multilateral standard-setting was observed at several IGF sessions and AI governance forums, highlighting the importance of avoiding domination by a few actors and ensuring interoperability [S53][S48][S52][S54].
Similar Viewpoints
Both emphasize that standards development must start with industry input and that standards are the core mechanism for addressing security risks in agentic AI [158-162][332-339].
Speakers: Austin Mayron, Sam Kaplan
CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems. Standards as security foundation – Sam Kaplan argues that standards organizations are the essential foundation for understanding and mitigating the three‑dimensional risk picture of agentic AI, especially security.
Both stress layered guardrails, clear permissions, and human accountability as essential safeguards for agentic AI deployments [105-115][239-248].
Speakers: Caroline Louveaux, Syam Nair
Effective deployment of agentic AI requires agents to operate within clearly defined values, permissions, and human‑in‑the‑loop oversight to prevent open‑ended autonomy. Multi‑level guardrails & data governance – Syam emphasizes layered safeguards, public‑private partnership, rigorous data governance, and human accountability.
Both view the OECD as the primary, foundational multilateral framework guiding AI policy globally [386-393][401-403].
Speakers: Danielle Gilliam‑Moore, Sam Kaplan
OECD as the global anchor – Danielle identifies the OECD’s AI principles and reporting framework as the foundational reference for worldwide policy alignment. OECD has been the leader … foundational piece – Sam notes that many U.S. state definitions and international policies are based on OECD principles.
Both advocate for the creation of technical benchmarks and standards, through collaborative multilateral bodies, to assess and mitigate risks of multi‑agent AI systems [410-415][401-402].
Speakers: Ellie Sakhaee, Sam Kaplan
Technical benchmarks for multi‑agent systems – Ellie stresses the need for academic‑industry collaboration to develop benchmarks that evaluate emerging multi‑agent behaviors before deployment. International Consortium of Safety Institutes – Sam suggests a tactical, multilateral forum to develop technical standards and taxonomies for agentic AI security.
Both underline the importance of industry‑government collaboration, with a bottom‑up approach to shaping standards and policy for agentic AI [144-152][158-162].
Speakers: Jason Oxman, Austin Mayron
Industry has a responsibility to inform governments about risks and emerging guardrails, and clear guidance helps regulators understand challenges. CAISI follows a bottom‑up, humility‑driven approach, gathering input from field experts before defining problems.
Unexpected Consensus
Cross‑domain agreement on verification, data governance, and layered guardrails between a hardware‑centric design firm (Synopsys) and a data‑centric storage provider (NetApp).
Speakers: Prith Banerjee, Syam Nair
Verification, validation, and safety for physical AI – Prith warns that agentic AI controlling physical systems requires near‑100 % digital verification to prevent catastrophic misuse. Multi‑level guardrails & data governance – Syam emphasizes layered safeguards, public‑private partnership, rigorous data governance, and human accountability.
Despite operating in different parts of the technology stack (chip design vs. data storage), both speakers converge on the necessity of exhaustive verification and strong data governance as core guardrails for safe agentic AI, a convergence not explicitly anticipated given their distinct business focuses [188-207][239-248].
Overall Assessment

The panel exhibits strong consensus around four core themes: (1) voluntary, industry‑driven consensus standards are preferred to top‑down regulation; (2) a bottom‑up, collaborative approach to standards and policy is essential; (3) security, risk assessment, and layered guardrails—including human oversight and data governance—are critical for safe agentic AI; (4) the OECD serves as the primary global anchor for policy alignment, complemented by calls for inclusive, open standards and multilateral coordination.

High consensus across technical, policy, and governance dimensions, indicating that future policy initiatives are likely to prioritize voluntary standards, collaborative stakeholder engagement, robust security frameworks, and alignment with OECD principles, thereby facilitating broader industry adoption while safeguarding societal interests.

Differences
Different Viewpoints
Preferred multilateral platform for coordinating agentic AI governance
Speakers: Danielle Gilliam-Moore, Carly Ramsey, Sam Kaplan, Combiz Abdolrahimi
Danielle Gilliam-Moore identifies the OECD as the primary global anchor for AI policy alignment, citing its principles, reporting framework and influence on EU and US state legislation [386-393]. Carly Ramsey points to Singapore International Cyber Week as a practical venue where governments converge to discuss cyber and AI policy, emphasizing its annual, region-focused nature [404-406]. Sam Kaplan, while agreeing on the OECD’s importance, adds the International Consortium of Safety Institutes as a tactical forum for developing technical standards and taxonomies for agentic AI security [401-402]. Combiz Abdolrahimi expands the set of relevant multilateral bodies to include the ITU, UN and AI-for-Good initiatives, arguing for broader inclusive engagement [432-434].
Speakers disagree on which multilateral forum should be the primary focus for coordinating agentic AI governance. Danielle stresses the OECD as the foundational reference, Carly highlights a regional cyber‑week event, Sam adds a safety‑institutes consortium, and Combiz calls for even broader UN‑based platforms. All agree coordination is needed but differ on the optimal venue.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at IGF and other multistakeholder venues have identified the need for a dedicated multilateral platform to coordinate AI governance efforts, reflecting broad agreement on the principle despite differing platform preferences [S53][S48].
Approach to achieving effective governance: global standards versus agile, sector‑specific frameworks
Speakers: Danielle Gilliam-Moore, Carly Ramsey, Sam Kaplan, Austin Mayron
Danielle Gilliam-Moore argues for agile, ministry-driven frameworks that can act faster than lengthy ISO standards, using sector ministries and safety institutes as interim solutions [353-358]. Carly Ramsey and Sam Kaplan emphasize the importance of global, voluntary consensus standards (e.g., NIST, OECD) as the foundation for security and interoperability, advocating for harmonisation across regions [304-313][332-339]. Austin Mayron describes a bottom-up, voluntary standards process coordinated through NIST and CAISI, focusing on industry-driven consensus rather than sector-specific regulation [158-162][156-162].
All speakers seek robust governance for agentic AI but diverge on the mechanism: Danielle promotes fast, sector‑specific, ministry‑led frameworks; Carly, Sam and Austin favour globally‑aligned, voluntary consensus standards. The disagreement lies in the speed and scope of the governance approach.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between universal standards and sector-specific agile frameworks has been highlighted, with calls for modular, interoperable standards ecosystems to balance consistency and flexibility [S52][S50][S53].
Unexpected Differences
Scope of openness in AI models and standards
Speakers: Carly Ramsey, Austin Mayron
Carly Ramsey calls for open models and open standards to make agentic AI accessible globally and stresses the need for compatibility between regional frameworks and NIST standards [304-313]. Austin Mayron, while supporting voluntary standards, does not explicitly address openness of models and focuses on industry-driven, possibly proprietary, standards development and sector-specific listening sessions [32-38][156-162].
Carly’s explicit demand for open, universally accessible standards and models was not mirrored by Austin’s discussion, which centered on voluntary, industry‑driven standards without a clear stance on openness. This divergence was not anticipated given the overall consensus on standards.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on openness reference the foundational openness of the open-source movement and the need to balance openness with trust and security, as discussed in U.S. policy reflections and global AI openness perspectives [S56][S55][S53].
Overall Assessment

The panel largely converged on the importance of standards, guardrails, and collaborative governance for agentic AI. The most notable divergences concern the preferred multilateral coordination mechanism (OECD vs regional events vs safety‑institute consortia) and the balance between global standards and agile, sector‑specific frameworks.

Low to moderate. While participants share common goals of safe, trustworthy, and inclusive agentic AI, they differ on the pathways to achieve these goals. The disagreements are more about implementation details than fundamental principles, suggesting that consensus on high‑level policy is achievable, but coordination on specific institutional venues and governance models will require further negotiation.

Partial Agreements
All speakers agree that standards are the preferred tool for governing agentic AI, but they differ on the emphasis: Jason focuses on industry preference, Austin on the front‑door government role, Sam on security foundations, and Carly on openness and global accessibility.
Speakers: Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey
Jason Oxman argues that voluntary, industry-driven consensus standards are preferable to top-down regulation for governing agentic AI [172-174]. Austin Mayron describes CAISI’s role in partnering with NIST to develop voluntary standards that unlock adoption [26-29]. Sam Kaplan states that standards organisations are the essential foundation for understanding and mitigating the three-dimensional risk picture of agentic AI [332-339]. Carly Ramsey stresses the need for open, inclusive standards to make agentic AI accessible worldwide and to harmonise regional frameworks [304-313].
Both agree that robust guardrails are essential for safe deployment of agentic AI, but Caroline focuses on payment‑specific guardrails while Syam advocates a broader, multi‑level enterprise framework that includes data governance and accountability.
Speakers: Caroline Louveaux, Syam Nair
Caroline Louveaux proposes four guardrails (know your agent, security-by-design, clear consumer intent, traceability) to ensure trustworthy agentic payments [218-226][227-231]. Syam Nair emphasizes layered guardrails, public-private partnership, rigorous data governance and human accountability for enterprise-wide agentic AI risk management [239-248].
Takeaways
Key takeaways
CAISI (U.S. Center for AI Standards and Innovation) acts as the industry front‑door to the U.S. government, partnering with NIST to develop voluntary, consensus‑based standards that facilitate safe adoption of agentic AI.
Security and trust are foundational; standards bodies (NIST, OECD, International Consortium of Safety Institutes) are seen as the primary mechanism for defining guardrails and risk‑mitigation frameworks for agentic AI.
Open, inclusive standards and open‑model approaches are essential for global accessibility and for harmonising regional frameworks (e.g., Singapore vs. NIST).
Concrete, operational guidance (playbooks, benchmarks, verification/validation methods) is preferred over abstract principles; industry seeks clear, actionable standards.
Agentic AI is already delivering business value: Synopsys uses “agentic engineers” to accelerate chip‑and‑system design; MasterCard deploys agents for real‑time fraud detection and payment execution; NetApp embeds agents near storage controllers to improve data quality and surface security threats.
Four core guardrails for trustworthy agentic payments were outlined: (1) Know Your Agent, (2) Security‑by‑Design, (3) Explicit Consumer Intent, (4) Traceability & Auditability.
Risk‑management guardrails must be multi‑layered, include strong data governance and public‑private partnership, and retain ultimate human accountability.
Policy should focus on protecting humans, adopt a continuum‑of‑autonomy approach (human‑in‑the‑loop → human‑on‑the‑loop → human‑in‑command), and regulate applications rather than trying to freeze underlying models.
Agile, sector‑specific governance (leveraging existing ministries or safety institutes) can fill gaps while longer‑term ISO standards are developed.
International coordination is critical; the OECD’s AI principles and reporting framework are identified as the primary global anchor, complemented by regional forums such as Singapore International Cyber Week and multilateral bodies (ITU, UN, AI‑for‑Good). Technical benchmarks for multi‑agent systems are needed to understand emergent risks before deployment.
Resolutions and action items
CAISI issued a Request for Information (RFI) on AI‑agent security; industry is invited to submit comments within the next month.
CAISI announced sector‑specific listening sessions (healthcare, education, finance) to be held in April to gather barriers to adoption and inform standards development.
MasterCard shared its four‑point guardrail playbook for agentic payments, signalling intent to adopt these internally and encourage industry uptake.
Synopsys highlighted its development of “agentic engineers” as a product offering, indicating ongoing internal deployment.
NetApp is progressing toward level‑3 agentic capabilities (agents near storage controllers) and will continue to refine data‑governance guardrails.
Panelists collectively urged companies to engage with standards bodies (NIST, CAISI, OECD, International Consortium of Safety Institutes) and submit feedback on emerging drafts.
Policymakers were encouraged to look to the OECD for baseline principles and to support sector‑specific, agile regulatory frameworks.
Unresolved issues
Specific technical specifications for AI‑agent security standards and verification/validation metrics remain under development.
How to achieve seamless harmonisation between regional standards (e.g., Singapore’s framework) and U.S./NIST guidelines is still an open question.
The exact definition of the autonomy continuum and the thresholds for shifting from human‑in‑the‑loop to human‑on‑the‑loop have not been concretised.
Benchmarks for multi‑agent system behaviour and emergent risk assessment are not yet established.
Mechanisms for ongoing public‑private partnership on data governance and accountability in large‑scale deployments need further definition.
Details on how smaller, sector‑specific ministries will coordinate with global standards bodies to produce agile frameworks were not fully fleshed out.
Suggested compromises
Adopt voluntary, consensus‑based standards (via NIST/CAISI) rather than top‑down regulation to balance innovation speed with safety.
Implement sector‑specific, agile governance frameworks (through existing ministries or safety institutes) as interim measures while broader ISO standards are being finalised.
Combine open‑model/open‑standard approaches with rigorous security‑by‑design requirements to ensure accessibility without sacrificing trust.
Use a layered guardrail approach—technical safeguards, data‑governance policies, and human accountability—to mitigate the larger blast radius of agentic errors.
Thought Provoking Comments
CAISI was originally founded as the U.S. AI Safety Institute, but last year it was refounded as the Center for AI Standards and Innovation, signaling a shift away from safety principles toward standards and innovation.
Highlights a strategic pivot in government approach—from prescriptive safety to enabling industry through standards—introducing a new framework for public‑private collaboration.
Set the stage for the discussion on how government can facilitate adoption; prompted later speakers to reference standards, voluntary consensus, and the role of NIST, shaping the conversation toward bottom‑up, industry‑driven policy development.
Speaker: Austin Mayron
We have created agentic engineers… they complement human engineers rather than replace them, acting as lower‑level reasoning agents while humans stay in the loop.
Introduces the concept of ‘agentic engineers’ as a hybrid workforce, reframing AI agents as augmentative tools rather than job‑threatening replacements.
Shifted the dialogue toward collaboration between AI and humans; later participants (e.g., Caroline, Syam) referenced human oversight and guardrails, deepening the discussion on human‑in‑the‑loop designs.
Speaker: Prith Banerjee
Imagine an autonomous car in Mumbai being hacked and used as a weapon… software‑defined airplanes could become missiles. We must ensure responsible, safe AI in intelligent product design.
Provides a vivid, high‑stakes scenario that underscores the potential physical dangers of agentic AI, moving the conversation from technical benefits to existential risk.
Created a turning point toward safety concerns; prompted Caroline to discuss guardrails, and Syam to talk about blast radius and accountability, adding urgency and depth to the risk‑management discussion.
Speaker: Prith Banerjee
Our four guardrails for agentic payments: know your agent, security by design, clear consumer intent, and traceability/auditable records.
Offers concrete, actionable policy recommendations that translate abstract safety concepts into specific operational controls.
Anchored the abstract safety talk in practical measures; other panelists referenced these guardrails when discussing enterprise risk management, and it guided the later focus on standards and accountability.
Speaker: Caroline Louveaux
Data governance is the key; agents have no empathy and make decisions solely on data. If data lineage isn’t understood, agents can produce scary outcomes, and accountability always rests with humans.
Connects data quality and governance directly to agentic risk, emphasizing that the root of many failures lies in data rather than the agents themselves.
Expanded the conversation from system‑level guardrails to the foundational role of data, influencing subsequent remarks about multi‑level safeguards and the need for clear operational standards.
Speaker: Syam Nair
We should think of a continuum of agent autonomy and move from ‘human‑in‑the‑loop’ to ‘human‑on‑the‑loop’ or ‘human‑in‑command’ as agents become more reliable, similar to FAA’s evolving drone oversight.
Provides a nuanced framework for scaling oversight with agent capability, offering a clear regulatory pathway rather than a binary safe/unsafe view.
Guided the policy discussion toward graduated oversight mechanisms; later speakers referenced this continuum when discussing guardrails and standards, adding a layer of sophistication to the regulatory conversation.
Speaker: Ellie Sakhaee
Policy should focus on the impact on humans—‘humans before models’—and ask what we should do, not just what we can do, to prevent harm.
Re‑centers the debate on human welfare, reminding participants that technology policy is ultimately about protecting people, not just advancing tech.
Reinforced the human‑centric theme introduced earlier, influencing the tone of later remarks about inclusive standards and global governance.
Speaker: Jennifer Mulvaney
Policymakers need to ensure agentic AI is inclusive and accessible; open standards and harmonization across regions (e.g., NIST vs. Singapore frameworks) are essential.
Raises the issue of global equity and standard compatibility, highlighting the risk of fragmented regulations that could hinder adoption.
Shifted the focus to international coordination; prompted Danielle, Sam, and others to suggest multilateral venues like the OECD and to discuss cross‑regional alignment.
Speaker: Carly Ramsey
The OECD provides the foundational global platform for AI policy; its principles underpin the EU AI Act and many US state initiatives, making it the ideal venue for coordinated standards.
Identifies a concrete, existing multilateral institution that can serve as the hub for harmonized policy, moving the conversation from abstract needs to a specific solution.
Consolidated the earlier calls for coordination into an actionable recommendation; subsequent speakers (Sam, Carly, Ellie) built on this by mentioning complementary bodies and events, solidifying the multilateral governance theme.
Speaker: Danielle Gilliam-Moore
Governments should provide practical, operational standards and playbooks rather than abstract principles; clarity and actionable guidance are what industry needs.
Emphasizes the necessity for concrete implementation tools, bridging the gap between high‑level policy and day‑to‑day industry practice.
Reinforced the demand for actionable guidance, echoing earlier points about standards and influencing the final consensus on the need for clear, practical frameworks.
Speaker: Combiz Abdolrahimi
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the conversation from a broad overview of agentic AI to concrete concerns about safety, governance, and global coordination. Austin’s framing of CAISI’s standards‑focused mission established a bottom‑up policy lens, which Prith amplified with vivid risk scenarios and the notion of ‘agentic engineers.’ Caroline’s four guardrails and Syam’s emphasis on data governance translated these risks into actionable controls, while Ellie’s continuum of autonomy offered a scalable oversight model. The human‑centric reminder from Jennifer kept the dialogue grounded in societal impact. Finally, Carly, Danielle, and Combiz converged on the need for inclusive, harmonized, and practical standards, pinpointing the OECD and other multilateral forums as the vehicles for such coordination. Collectively, these comments shifted the tone from exploratory to solution‑oriented, deepened the analysis of risk and governance, and forged a consensus around the importance of standards, human oversight, and international collaboration in shaping policy for agentic AI.

Follow-up Questions
What specific enterprise guardrails or risk‑management practices should companies adopt for agentic AI deployments?
Understanding concrete guardrails is critical for safe, responsible deployment of agentic AI across industries such as semiconductor design, payments, and data storage.
Speaker: Jason Oxman (asked), Prith Banerjee, Caroline Louveaux, Syam Nair
What should policymakers prioritize when regulating or supporting agentic AI?
Policymakers need clear focus areas—human‑centric safeguards, standards, and agile frameworks—to foster innovation while protecting consumers and society.
Speaker: Jason Oxman (asked), Jennifer Mulvaney, Ellie Sakhaee, Carly Ramsey, Sam Kaplan, Danielle Gilliam‑Moore, Combiz Abdolrahimi
Which multilateral platform or organization should governments use to coordinate global agentic AI standards and policies?
A common venue is needed to harmonize standards across regions (e.g., OECD, International Consortium of Safety Institutes, Singapore International Cyber Week) to avoid fragmented regulations.
Speaker: Jason Oxman (asked), Danielle Gilliam‑Moore, Sam Kaplan, Carly Ramsey, Ellie Sakhaee, Jennifer Mulvaney, Combiz Abdolrahimi
How can industry best provide input to the U.S. administration on guardrails for agentic AI?
Effective industry‑government communication ensures that standards and guidelines address real‑world barriers and regulatory concerns.
Speaker: Jason Oxman (asked), Austin Mayron
What research is needed to develop technical benchmarks for multi‑agent systems and understand emergent risks?
Benchmarks will allow systematic testing of multi‑agent interactions, helping to identify safety and security gaps before deployment.
Speaker: Ellie Sakhaee, Sam Kaplan
How can data governance be ensured for agentic AI to prevent manipulation and guarantee trustworthy decisions?
Since agents act on data, robust data lineage, governance, and accountability mechanisms are essential to avoid erroneous or malicious outcomes.
Speaker: Syam Nair, Combiz Abdolrahimi
How should AI agent security be addressed in regulated sectors (healthcare, education, finance), especially regarding handling of PII?
Regulated industries need standards and benchmarks that demonstrate compliance with privacy laws while enabling agentic AI adoption.
Speaker: Austin Mayron
What approaches can achieve interoperability of AI agents across different sectors and platforms?
Interoperability is key for widespread adoption; research is needed on common protocols, data formats, and integration frameworks.
Speaker: Austin Mayron
How can practical standards, playbooks, and operational clarity be created for governance of agentic AI?
Stakeholders request concrete, actionable guidance rather than abstract principles to implement responsible AI at scale.
Speaker: Combiz Abdolrahimi
How can global harmonization of standards be achieved, especially between the U.S., Singapore, India, and other regions?
Ensuring that regional frameworks (e.g., Singapore’s AI governance) align with global standards (e.g., NIST, OECD) avoids conflicting requirements for multinational firms.
Speaker: Carly Ramsey
How can we assess and mitigate kinetic consequences of agentic AI in physical systems such as autonomous vehicles and aircraft?
Physical AI agents can cause real‑world harm; research into verification, validation, and safety‑critical testing is needed for safety‑critical domains.
Speaker: Prith Banerjee
How can trust, consumer intent, and auditability be ensured in agentic payment systems?
Payments require clear verification of agents, secure design, explicit consumer consent, and traceability to prevent fraud and maintain confidence.
Speaker: Caroline Louveaux
How can public‑private partnerships define and enforce guardrails for agentic AI within enterprises?
Collaboration between governments and companies is needed to set sector‑specific rules, especially around data provenance and accountability.
Speaker: Syam Nair
How can standards bodies move from high‑level principles to tactical, actionable standards for agentic AI security?
Translating broad AI safety concepts into concrete security specifications will help industry implement protective measures effectively.
Speaker: Sam Kaplan
How can the OECD be leveraged effectively as a central forum for AI policy coordination?
The OECD’s principles have become a global reference; understanding its mechanisms can guide nations in aligning regulations.
Speaker: Danielle Gilliam‑Moore, Sam Kaplan
What role can the International Consortium of Safety Institutes play in developing tactical standards for agentic AI security?
This consortium could bridge the gap between high‑level policy and technical standards, focusing on security taxonomy and measurement.
Speaker: Sam Kaplan
How can the Singapore International Cyber Week serve as a platform for worldwide policy dialogue on agentic AI?
Annual cyber‑security gatherings can bring together diverse governments to discuss AI governance, fostering inclusive standard‑setting.
Speaker: Carly Ramsey
What research is needed to create benchmarks for AI agent identity and verification?
Robust identity verification is essential for secure agent interactions; standards are currently being drafted and need further study.
Speaker: Austin Mayron (referencing NIST publication)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Inclusive Societies with AI

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel, comprising leaders from industry, development and government, convened to examine systemic obstacles facing India’s informal workforce and possible digitally enabled interventions [1-4][5-10][11-20][21-22]. Romal Shetty identified five recurring roadblocks: lack of discovery and trust, insufficient steady demand, delayed or unfair payment, inadequate upskilling, and limited access to social protection [26-31].


Arundhati Bhattacharya argued that a nationwide digital marketplace is essential to make workers’ credentials visible, match them with local opportunities, and provide verifiable upskilling certifications [34-37]. She stressed that payment delays plague even MSMEs and large corporates, and that only a digital platform can create an audit trail to enforce accountability [38-44]. She also warned that reports without a designated execution authority will remain ineffective, calling for a body that can implement and monitor recommendations [46-51].


Manisha Verma outlined Maharashtra’s newly created SEED department, which oversees more than a thousand ITIs, a state board for vocational accreditation, a public skills university, and a dedicated innovation society to foster skilling and inclusion of vulnerable groups such as prisoners, persons with disabilities and tribal communities [57-77]. She highlighted partnerships with industry, including PPP models that hand over ITI management to anchor firms, short-term evening courses, and collaborations such as Mahindra Tractors training that achieved 100 % placement in Gadchiroli [274-285].


Aditya Natraj emphasized that the bottom quartile (over 200 million people, many of them women married before 18) remains disconnected from markets, with only 40 % of families having a member with six years of education, making productivity gains dependent on addressing these structural gaps [84-110][112]. He illustrated that simple equipment upgrades, like replacing stone-age bamboo tools, can dramatically improve product quality without high-tech solutions [129-135]. He further argued that aggregating blue-collar workers through models such as FabIndia, farmer cooperatives, or rating platforms is crucial for quality assurance and for leveraging government schemes like NRLM and SRLM [188-215].


Discussing behavioural barriers, Aditya described four user profiles among ASHA health workers, ranging from non-phone users to tech-savvy youths, and stressed the need to tailor digitisation programmes to these distinct groups rather than applying a one-size-fits-all approach [306-322]. Across the discussion, participants concurred that digital platforms, government stewardship, and industry partnerships must be coordinated to create accountability, upskill workers, and unlock market access for the informal sector [34-37][57-77][120-136][274-285][188-215]. The panel concluded that a unified execution framework, supported by targeted technology and inclusive policies, is essential to transform India’s informal economy and realise its latent employment potential [46-51][324-328].


Keypoints


Major discussion points


Digital platforms are essential to solve the systemic roadblocks faced by informal workers (discovery, steady demand, timely payment, upskilling, and protection). Arundhati stresses that a “digital way” is the only viable solution for a populous nation, describing a marketplace for credentials, a verifiable up-skilling system, and a digital payment trail that creates accountability [34-44]. She also questions who will be responsible for executing the recommendations in the many reports that have been produced [46-51].


Government-led skilling, vocational education, and social-inclusion programmes form the backbone of the response. Manisha outlines the newly created Department of Skills, Employment, Entrepreneurship and Innovation, its oversight of 1,000+ ITIs, the state board for accreditation, the public State Skills University, and targeted programmes for prisoners, persons with disabilities, women and tribal communities [57-77]. She later adds a public-private partnership model that hands over ITI management to industry for curriculum design and apprenticeship [274-279].


Aggregating informal workers and standardising quality are critical for market access and productivity gains. Aditya explains that unlike the already-aggregated white-collar sector, blue-collar workers lack a mechanism for consumers to assess quality, and he describes several aggregation models (FabIndia-type, cooperative-type like Amul/Seva, and rating-platforms such as UrbanClap) that can improve incentives and enable technology deployment [188-204]. He further points to the National Rural Livelihood Mission (NRLM) and State-level equivalents as key aggregation vehicles [209-215].


Behavioural and adoption barriers must be addressed through differentiated, context-specific interventions. Using the example of ASHA health workers, Aditya shows that technology adoption varies across four age-and-skill cohorts, from workers with no phone experience to young, smartphone-savvy users, and argues that “one size fits all” programmes fail [295-322].


Tailored, persona-based solutions and strong multi-stakeholder coordination are needed to avoid generic, ineffective policies. Romal highlights that different worker personas (cultivators, artisans, textile workers, etc.) face distinct challenges, and Arundhati reinforces that “there cannot be a cookie-cutter solution” and that the government must enable an ecosystem that supports each vertical [115-118][120-126].


Overall purpose / goal of the discussion


The panel convened representatives from industry, the development sector, and government to diagnose the chronic challenges of India’s informal workforce, evaluate existing interventions, and chart concrete, accountable actions (particularly digital and skilling-focused strategies) that can be implemented over the next 12-18 months to boost productivity, inclusion, and livelihoods.


Overall tone and its evolution


– The conversation opens with a formal, optimistic tone, emphasizing the privilege of a diverse panel and the promise of collaborative solutions [1-4][22-24].


– It shifts to a critical, problem-focused tone as Arundhati highlights systemic failures (payment delays, lack of execution accountability) and calls for an authority to drive implementation [45-51].


– The discussion then becomes analytical and solution-oriented, with Manisha detailing concrete government programmes and Aditya dissecting aggregation models and behavioural hurdles [57-77][188-204][295-322].


– Towards the end, the tone turns reflective and hopeful, celebrating successful pilot initiatives, sharing inspiring anecdotes, and expressing confidence in multi-stakeholder effort [166-176][324-327].


Overall, the dialogue moves from introductory optimism, through candid critique, into constructive problem-solving, and concludes on an encouraging, forward-looking note.


Speakers

S. Anjani Kumar


Area of Expertise: Moderation / Panel facilitation


Role / Title: Moderator/Host (introduced the panel)


Manisha Verma


Area of Expertise: Government policy, skills development, social inclusion


Role / Title: Additional Chief Secretary, SEEID (Skills, Employment, Entrepreneurship and Innovation Department), Maharashtra; IAS officer, 1993 batch [S2]


Arundhati Bhattacharya


Area of Expertise: Technology leadership, responsible AI, inclusive digital adoption


Role / Title: Chairperson and CEO, Salesforce India; former Chairperson, State Bank of India; Padma Shri awardee


Aditya Natraj


Area of Expertise: Education reform, community-led development, poverty alleviation


Role / Title: CEO, Piramal Foundation; Founder, Kaivalya Education Foundation and Piramal School of Leadership [S7]


Romal Shetty


Area of Expertise: Management consulting, workforce productivity, digital transformation


Role / Title: CEO, Deloitte South Asia; Moderator of the panel [S8]


Additional speakers:


None (all speaking participants are covered in the list above).


Full session reportComprehensive analysis and detailed insights

The session opened with S. Anjani Kumar introducing a short video on the informal sector before welcoming a three-pronged panel that represented industry, the development sector and government [1-4]. The first panelist was Ms Arundhati Bhattacharya, Chairperson and CEO of Salesforce India, a Padma Shri awardee and recognised leader in responsible AI and public-private collaboration [5-10]. The development side was represented by Mr Aditya Natraj, CEO of the Piramal Foundation and an Ashoka Fellow [11-16], while the government was represented by Ms Manisha Verma, Additional Chief Secretary of Maharashtra’s SEEID department, a senior IAS officer with a record of drafting major social legislation [17-20]. Romal Shetty, CEO of Deloitte South Asia, moderated the discussion and framed each round of questioning [21-24].


Romal Shetty summarised the study’s findings, identifying five systemic roadblocks for informal workers: (1) limited discovery and trust, (2) insufficient and irregular demand, (3) delayed or unfair payments, (4) constrained upskilling opportunities, and (5) exclusion from social-protection schemes [26-31]. He asked the panel which of these issues should be prioritised over the next 12-18 months.


Arundhati Bhattacharya argued that, given India’s population size, a digital-first approach is the only viable solution [34-36]. She described a nationwide marketplace where workers could upload credentials, view local job opportunities and obtain verifiable upskilling certificates [34-37]. She noted that payment delays affect not only blue-collar workers but also MSMEs, large corporates and government agencies, and that a digital platform would create an immutable audit trail to enforce accountability [38-44]. Bhattacharya called for a clearly designated authority to own implementation of the platform, warning that without such accountability recommendations remain untracked [45-51].


Manisha Verma outlined Maharashtra’s newly created Department of Skills, Employment, Entrepreneurship and Innovation (SEEID), which now oversees more than a thousand ITIs, a state board that accredits private training providers, and the public Ratan Tata State Skills University [57-66][72-73]. She highlighted short-term skilling programmes for vulnerable groups, including prisoners, persons with disabilities, women and tribal communities, to ensure social inclusion [74-77]. Verma also described a public-private partnership (PPP) policy that hands over ITI management, curriculum design and faculty recruitment to industry anchor partners for ten to twenty years, aligning with the national PM SETU scheme [274-279]. Additional initiatives include (a) opening ITI programmes to non-ITI students in the evenings to optimise infrastructure utilisation, and (b) a partnership with Mahindra Tractors in Gadchiroli that delivered a certified training batch with 100% placement for tribal students [240-270].


Aditya Natraj shifted the focus to the “bottom quartile” of India’s population, noting that over 200 million people remain in poverty, with 36% of women in the eastern states marrying before the age of 18 and 40% of poor families having no member with six years of schooling [84-110][112]. He argued that productivity gaps are rooted in structural exclusion rather than mere skill deficits, and that simple, low-tech interventions, such as replacing stone-age bamboo tools with modestly improved equipment, can dramatically raise product quality and marketability [129-135]. Natraj emphasized that any digital solution must first build on existing aggregation models (e.g., Fabindia, Amul/SEWA, Urban Clap) to ensure quality assurance and create market incentives for technology deployment [188-215].


Both panelists highlighted the importance of aggregation, but Bhattacharya emphasized building a unified digital marketplace as the primary vehicle, whereas Natraj stressed that any digital solution must first leverage pre-existing aggregation mechanisms to guarantee quality and consumer confidence [34-37][188-209].


Regarding execution, Bhattacharya called for a clearly designated authority to own the platform’s implementation, while Verma described the government’s role as a catalyst that creates enabling policies (e.g., the PPP framework, PM SETU) and partners with industry for execution. Both agree on the need for strong execution, differing only on the preferred mechanism: centralised authority versus facilitative partnership [45-51][274-279].


Addressing behavioural barriers, Natraj presented a typology of ASHA health workers to illustrate technology-adoption diversity: (1) workers over 50 with no phone experience; (2) users of basic “dumb” phones; (3) smartphone owners who use devices only for entertainment; and (4) young, tech-savvy workers who already blend digital tools with income-generating activities [294-322]. He argued that one-size-fits-all digital programmes would miss three-quarters of the target audience, and that interventions must be tailored to each cohort’s skill level and comfort with technology [295-318].


Romal Shetty reinforced the need for persona-specific design, noting that cultivators, artisans, textile workers and migrant labourers each face distinct challenges such as volatility, market access, skill gaps and income insecurity [114-119]. Bhattacharya echoed this, stating that while fundamental issues like access, health and literacy must be addressed early, solutions should be vertical-specific and supported by government-enabled ecosystems [120-126].


Key takeaways


1. A digital platform is central to solving discovery, credential verification, demand matching and payment traceability, and must be complemented by sector-specific aggregation models.


2. Government agencies, exemplified by Maharashtra’s SEEID department, must lead inclusive skill development, accreditation and programmes for vulnerable groups, while also acting as catalysts for PPP-driven execution.


3. Productivity gaps stem largely from the exclusion of the bottom quartile; targeted, gender-sensitive and tribal-focused interventions are required.


4. Technology should augment, not replace, informal workers; upskilling should be delivered via verifiable digital certifications.


5. Public-private partnerships and the startup ecosystem can drive socially impactful innovations and job creation.


6. Digital adoption must be segmented according to user cohorts, with tailored training for each ASHA typology.


7. An accountable execution authority-whether a dedicated government body or a facilitative partnership framework-is essential to move from recommendation to action [34-44][57-77][84-110][112-135][188-215][294-322][45-51].


Proposed actions include establishing a lead agency to implement the nationwide digital marketplace, scaling Maharashtra’s PPP model for ITI management, expanding the “Startup Week” and direct work-order awards to nurture socially-impactful ventures, leveraging NRLM/SRLM for systematic aggregation of blue-collar workers, and designing tiered digital-adoption training that addresses the four identified ASHA cohorts [45-51][274-279][188-215][294-322]. Unresolved issues remain around the precise governance and funding structure for the platform, safeguards to ensure AI augments rather than displaces workers, and metrics for measuring the impact of skilling programmes on the bottom quartile [45-51][274-279][188-215][294-322].


In closing, all participants reaffirmed that multi-stakeholder collaboration, bringing together industry, development organisations and government, is indispensable for transforming India’s informal economy. While consensus existed on the goals of digital inclusion, skill development and accountable execution, the discussion highlighted moderate differences on the preferred aggregation mechanism and the balance between government-led versus industry-led execution. The panel concluded on an optimistic note, expressing confidence that coordinated, sector-specific, and accountable interventions can unlock the latent employment potential of India’s informal sector [4][23-24][120-124][324-328].


Session transcriptComplete transcript of the session
S. Anjani Kumar

show a video which will give you context of what the informal sector is, what are some of the interventions that can be taken, before I call the esteemed panel to have a discussion on the topic. So we are privileged to have a panel today, which represents industry, the development sector, and the government. You know, all of the ecosystem has to come together to solve for this problem. So may I now invite my first panelist, Ms. Arundhati Bhattacharya, Chairperson and CEO, Salesforce India. Thank you. She is the recipient of the Padma Shri, India’s fourth highest civilian award, and has frequently been featured on Forbes’ World’s 100 Most Powerful Women and Fortune’s World’s 50 Greatest Leaders lists.

She is a strong advocate of responsible AI, inclusive technological adoption, and public-private collaboration for national growth. She is instrumental in expanding India’s digital economy while embedding ethics, governance, and sustainability into technology ecosystems. Thank you, ma’am, for joining us today. Representing the development sector, we have the pleasure of inviting Mr. Aditya Natraj, the CEO of Piramal Foundation. He’s a prominent education reform leader and also the founder of Kaivalya Education Foundation and the Piramal School of Leadership. He has over 20 years of experience in the development sector, including a significant tenure with… driving volunteer-led literacy campaigns in rural India. He’s been recognized as an Ashoka Fellow, an Echoing Green Fellow, and an Aspen India Fellow.

He’s also the recipient of Times Now’s Amazing Indian Award in Education. Thank you, Aditya, for joining us. On the government side, again, I’m privileged to request Ms. Manisha Verma, Additional Chief Secretary, SEEID, Maharashtra. She’s a 1993 batch IAS officer who has contributed to drafting transformative regulations in India, like the National Food Security Act, the Forest Rights Act, the Rights of Persons with Disabilities Act, the Right to Education Act, MGNREGA, and others. She’s been felicitated by the Honorable President, the Honorable Prime Minister, NITI Aayog, the Honorable Governor, and the Honorable Chief Minister for various initiatives, and is also a recipient of the Maharashtra Foundation Award for Outstanding Policy. Thank you, ma’am, for joining us. And to kick us off, I’m delighted to welcome Romal Shetty, CEO of Deloitte South Asia, to

Romal Shetty

Thank you so much, Roy. Good afternoon, ladies and gentlemen, and it is always a privilege to have a wonderful panel here. So maybe I’ll kick off first with you, Arundhati. As you know, when we did our study, obviously you were a significant contributor to that study. We’ve seen that the informal workforce basically faces about five really systemic roadblocks. One is being discovered and trusted. Second is getting some steady demand. Third is getting fair and timely payment. Then upskilling that sort of translates into higher productivity. And, of course, accessing protections, insurance and others. So how do you see these challenges playing out in the future, and what, or which of them, must be prioritized in the next 12 to 18 months?

Arundhati Bhattacharya

So given the fact that ours is a very populous nation, I don’t think we have a way other than a digital way of addressing these solutions. In the sense that you might have a worker, say a person who works as a plumber, who might be really, really good at his job and there might be very good opportunities in his village or in the village next to his, but he has no idea that it exists. So this lack of knowledge is not something that you can manage to do away with unless you have some kind of a marketplace where people can put in not only their credentials and their experience, but also be able to access the opportunities that are there for their kinds of jobs.

That’s one piece. The second piece is that unless and until we put all of these people together, we would also not understand what is the upskilling that is required for such people. Because more and more, as days go by, we are realizing that everything is changing, all of the technology is changing, and the change in technology is such that it requires people to be further upskilled. Now how do you get that upskilling? How do you ensure that you have a verifiable certification that you have gone through that upskilling? Again, you have got to come back to the digital area. Third is regarding getting payment on time. As you said, this is something, by the way, which is a very big problem across India, and it does not only impact the blue-collar workers; it impacts even the MSMEs and the SMEs. And sadly enough, I would say it is the big corporates that are the worst at this, including the government. I mean, I cannot not include the government over there, because getting payments on time in India is something that is not considered to be at all important. It is one of the things that you do last.

You have to do it. So you do it at some point of time. And this is not something that speaks well for us as a country. It really adds to the difficulty in doing business, because you’re not funding people the moment that they need to be funded. And there has to be accountability for all of this, which, unless you use a digital platform, leaves no footprint. There is no footprint of the delays that are taking place unless you put a digital platform to this. So I think, you know, in the report that we put out together, and I think there were other people, especially your people, Deloitte people, who did a lot of work on this, they actually suggested a platform where all of these things could be comprehensively addressed.

Now, I was just asking Romal before coming in over here that India is great at putting out fantastic reports. At the end of the reports, who is charged with the execution? Who is really accountable, so that if it doesn’t get executed, there is a downside to it? We have no such downsides. We have suggestions, we have reports, and then we don’t have a person who is charged with the execution. I think it’s time for all of us to understand that reports are great, suggestions are fantastic, but there has to be an authority that will take charge of this, will run with it, and be accountable for actually implementing it. Because there are some really, really good suggestions over there that need to be implemented.

Romal Shetty

Thank you, Arundhati, and you know why she was the SBI chairperson: because she’s got a strong mind of her own and is always willing to challenge the status quo, I think in her own life as well as, of course, in the various positions that she’s held. Thank you, Arundhati. So Manisha, a question to you now, and this is really about Maharashtra. Could you share an overview of the work being undertaken by your department, for the benefit of all the delegates here? And how is it working towards enhancing human capital and social inclusion?

Manisha Verma

So first of all, thank you so much for having me here. I’m looking forward to a great dialogue with these esteemed panel members as well as all of you. I head the Department of Skills, Employment, Entrepreneurship, and Innovation; that is why it is written SEEID. So it’s not a very common kind of a department. This is a newly constituted department in Maharashtra. And to put it simply, it is overseeing the entire vocational education spectrum. So there are a thousand-plus institutes, ITIs, government and private, which are the cutting edge. You know, they are the cradle of creating a skilled workforce for the industries, manufacturing, and the service sector, but mainly manufacturing. And so all the ITIs are under the department’s oversight.

But we are also looking at short term skilling programs through our Maharashtra State Skilling Society. So all the government of India programs and the state budget resources for skilling. Then we have a state board of vocational education and training. So if you are a private provider of skill training, then the accreditation and recognition of the courses is done by our state board. And affiliation is also given because today you know that there is a lot of duping of people, ordinary people. There is no information as to whether the courses which are given in the market are actually accredited or have a value. So this body does the independent assessment of the training institutes and gives affiliation and recognition.

And then, to complete the spectrum, because you know that the students from ITIs, or people who are doing vocational education, might have aspirations for higher education, and independently also, we recently set up a public state skills university, the Ratan Tata State Skills University, in Maharashtra. So that is also doing pretty well now, I mean, in its infant stages. And then we have a Maharashtra State Innovation Society, which is under my department, which is looking at promotion of startups and incubators. So this is the whole spectrum of the work that we are doing. But not to miss out the vulnerable groups for social inclusion, we are also partnering with agencies to do skilling for jail inmates, prisoners in jail, people with disabilities, women, tribal areas and all.

So that in brief is the work that we are doing. Thank you.

Romal Shetty

Thank you, Manisha. Aditya, one of the core insights from our study, where we are working with the government, was that productivity gaps often come from inefficient workflows and tooling deficits rather than any lack of work effort. So as we look to increase productivity 10x to really realize Viksit Bharat aspirations, what guardrails do you think should be in place so that technology augments workers, improves their safety and earnings, and does not really replace them altogether?

Aditya Natraj

Yeah. So thank you very much for having me on this panel. It was great fun to be part of the committee at NITI Aayog as well, which put this together, thanks to Deloitte’s efforts. I think when we’re talking about this informal labor force, we’re all imagining this electrician who’s coming to our house, right? And so we’re imagining an upgrade of that. We at the Piramal Foundation are working with the bottom quartile of India. Largely, the top quartile is sitting in this room and driving the growth. The next quartile sort of supports that growth by being drivers, electricians, plumbers. The next quartile is just about surviving. And the fourth quartile, honestly, first of all, you have to tune in to even understand how badly off they are.

There are still, as per official statistics, over 200 million people in India in poverty, right? So the areas where we focus, which are the five eastern states, for example. I mean, when you’re talking about productivity deficit, I’ll just give you a few statistics, right, because we’re imagining this is a plumber who’s coming into my house, and how do I increase this thing? But what about the women? 50% of India is women, right? And in the states where we work, Jharkhand, Assam, Chhattisgarh, Orissa, Bihar, today the number is at 36% of women getting married below the age of 18. What is going to be my productivity gap? I got married before the age of 18. My productivity is measured by how fast I produce the first child and the second child.

And all my energy is going into just taking care of children. What is AI going to do for this girl who, by the age of 20, has two children and is at home? What is it going to do for the tribal who’s still in the Dandakaranya forest in South Chhattisgarh? So that group of people has a lower growth rate than the median of India. As it is, they were lower, and they have a lower growth rate. So really increasing productivity for that group, I think, is going to be key, because it’s not about taking the top quartile to $29,000, right? That is going to happen, because there are automatic mechanisms in place in the market to incentivize that productivity gain.

The bottom quartile is not yet plugged into the market, right? These are the 70 million people who are in poverty in these five states. Out of them, the statistic is that 40% of those families don’t have even one person who has had six years of education in the family. Six years; we’re not talking about 10th standard. So a lot of our programs are designed on, okay, 10th standard, after that you’re going to do ITI, you’re going to do this thing. So this bottom quartile really needs attention. I think productivity gains are going to come by us understanding why the bottom quartile is not involved in the market, and what we need to do three times, four times as hard, so that they’re not pulling the median of India down.

Romal Shetty

I mean, you know, as consultants, when we look at these reports, and I can tell you from the NITI one, I think these kinds of inputs matter, because it’s very easy sometimes just to be far off and sort of give recommendations. But when you realize the nitty-gritties as well, I think you realize that there have to be different solutions, and I think this report was where really different sets of people came together to contribute. Arundhati, back to you. We created these persona-led profiles, you know, the carpenter, the cultivator, and we chose this because challenges differ, right? So cultivators face sort of volatility and information gaps.

Artisans face market-access issues and middleman dependence. Textile workers face skills and technology gaps, and trade workers, of course, face income insecurity, and migration pressure as well. So how do you balance a centralized approach while ensuring each persona’s unique challenges are solved for?

Arundhati Bhattacharya

So basically, again, you know, there cannot be a cookie-cutter solution to all of this, because the personas are so different, the challenges are so different; you necessarily need to solve for people in different ways. There are certain fundamental issues that bother all of these, whether it’s an issue of access, an issue of health, an issue of, you know, basic understanding and literacy. These are all basic issues that need to get fixed at a very low level, in the sense at a very early level in their lives. But if you are looking beyond that, and if you are looking vertical-wise at the different kinds of people and the different ecosystems that they work for, you necessarily will have to come up with different solutions. And again, here I think this is where the major stakeholder, which is the government, has a role to play.

Because it is the government that is going to enable the ecosystem to help these people to grow. For them to grow on their own, like was being said by him, the upper quartile people can help themselves. The people who are absolutely at the lower quartile, they actually need help. And I remember one incident where, you know, we used to run this Youth for India program in State Bank of India, where we had people taking a gap year, coming and serving in the villages. Now one such guy was serving in one of these villages of Dang tribals, who work with bamboo. And he discovered that the equipment that they were working the bamboo with was basically stone-age equipment.

Literally stone-age equipment. Now just by changing the nature of the equipment that they were working with, and again nothing very fancy, nothing with technology or AI, and they were working with bamboo, just changing that equipment improved the quality of the product so much that it had a much better purchase in the market. So, you know, solutions may be something that’s very simple, but it is something that has to be innovated there, by actually getting knowledge of what really is holding them back. So I think, again, this is something that needs a lot of work, and it needs a lot of work by people at that place, which, again, has to be partly the government.

Romal Shetty

And in fact, the platform that the committee recommended in some sense was to also help to Uberize, to create demand, and to build skills as well. So as long as you have a simple phone, you could actually use it. So I think that was actually done as well. So, Manisha, coming to the startup ecosystem, and obviously Maharashtra has been doing phenomenally well in the startup ecosystem, could you share how you’re driving societal impact through this startup ecosystem?

Manisha Verma

I think honestly the startup ecosystem is something that has grown organically, and government should not take too much credit. I was just sharing with Arundhati ji before we were entering that, you know, some things are on autopilot and government should just catalyze or facilitate, and not obstruct, the growth. But nevertheless, I would like to say that we have been trying from the Maharashtra government side to really kind of catalyze this ecosystem which is there in Maharashtra. You know, Maharashtra has nearly 35,000 startups currently registered with DPIIT, and it is the leading state. And some of the things that we have been doing actually is to get this culture penetrated across the state.

Initially, we saw that startups were primarily centered around Mumbai, because of the ecosystem, and Pune. But today, I’m happy to share that every district in Maharashtra, including Gadchiroli, has a minimum of 25 startups registered. So can you imagine that? So we’ve tried to do it through multiple ways, like having hackathons, grant challenges, startup yatras, involving the college students and the rural areas as much, creating district-level committees, you know, led by the collector, but having an entire ecosystem of stakeholders, including principals, ITIs, the district industries officers, the MSME clusters. Then we also give some financial support, because not all startups are capable of prototyping and then, you know, getting the quality testing done. So we’ve done that.

We do some reimbursement for IPR, for domestic patents or, you know, international patents. We are helping them to obtain quality testing and certification. But a very unique experiment that we have done, I think, and which we can, you know, take genuine credit for, is our very unique program called Startup Week. We invite startups from across the country. We get nearly close to 3,000 entries every year. And they are shortlisted by an independent jury of domain experts and VCs. And then we have their pitching done before a second round of independent jury. Now, these are not just any startups; we are looking at startups and their technologies and innovations which have a large social impact. So just to give you an example, the sectors are actually clean energy, mobility, agriculture, health, education.

And fintech; these are the kinds of sectors. So I’m happy to share some examples. There was a startup, and then we give them, as awards, direct work orders up to 25 lakhs. Recently, we have entries from 15 to 25. So otherwise, startups are stuck with the procurement policies of the government; they are not able to compete with the tender systems that are there. So we give them direct work orders as the winning prize. And then we connect them with the domain departments to roll out their innovations. And that has been very helpful for our startups to gain visibility and even gain international markets and investors. So some of our startups have really grown, like this Sagar Defense.

Now today, it’s called Sagar Defense. They started small, and now today their technology has been upgraded for marine surveillance, the Indian Navy has also placed orders, and they’ve created a manufacturing plant near Nashik. We have New Docs recently; it was our winner, from IIT and other people, who have created a very beautiful home diagnostic app. On the phone you can have more than 30 health parameters at a very low cost. We have one which has done the entire thing of menstrual hygiene management and disposal of sanitary pads in a sustainable way. We did their pilots in Mantralaya itself to, you know, see the proof of concept and give them the work order. So we have, I think it is New Motors.

I remember a very interesting one for physically challenged people: their wheelchair converts into a battery-operated two-wheeler for the disabled person. So I can cite a lot of examples, and I would say even in the areas of agriculture and clean energy. So these are some of the efforts that we have been doing, and hopefully we’ll take it to the next level with the help of such experts.

Romal Shetty

I think it’s fantastic work, and on a lighter note, of course, Manisha ji, we also struggle on the tender side, so maybe… So Aditya, from your experiences, where do digital or sort of AI-led interventions for the informal workforce break down, and what are some of the learnings from the past? Like you said, you bucketed it into the four categories as well.

Aditya Natraj

So we’ve done a lot of digitization work. In fact, we’ve showcased it even at the expo, and we work with the government to digitize government health systems, digitize government education systems, agri, water; in any space, digitization normally adds value. But here, when we are talking about the informal labor force, I think we have to look at the mental model. When we are talking about white-collar workers, right, like Deloitte or a law firm, they got aggregated more than 40, 50 years ago. If you went back 100 years, you had an individual chartered accountant, an individual lawyer, an individual banker, or an individual consultant. Now you have firms. Now, as soon as you’ve aggregated, you get lots of benefits, because you get specialization, and then you can reintegrate to offer a more complex service.

Or you get more skill capability growth for each person. You can get quality standards. The customer knows what he’s buying. So in the white-collar workforce, this has already happened. In the blue-collar workforce, on the other hand, tell me where you will go for a quality electrician, right? You’ll end up asking your neighbors. What about a carpenter, a tailor? We’ve not yet organized the blue-collar workforce in a way in which the customer can choose quality predictably, right? As an urban consumer, I will face more than 80 brands a day; even my salt is branded, it’s Catch. You walk into a village today, nothing is branded, right? So the need to aggregate is very critical to improve quality of service, and this is what we tried with our farmer producer organizations and how they could improve. But if you see, there are multiple models for this aggregation, right? You can have the Fabindia-type model, the private sector model: Hidesign helped the entire supply chain in leather, Fabindia helped the entire textile supply chain, right? You can go in that private-sector-type model. The second model is that you can actually go the Amul and the SEWA way, where the firm itself is owned by the farmers.

Today, when I buy Amul milk, 90% of what I pay goes back to the farmer. When you buy Nestle milk, it doesn’t go back to the farmer. So when you buy from SEWA, when you buy Lijjat Papad, 90% goes back to the last person, because it’s organized as a cooperative. And the third is the Urban Clap model, which says: I will certify the person, and he’s got a 4.5 rating, so you choose him. You choose this physiotherapist, this carpenter, this plumber. All of these aggregate in different ways and distribute incentives in different ways. But unless we think of aggregation, the artisan who’s 45 years old and doing traditional Kalamkari cannot expect someone to come and choose his particular piece without the craft having been branded as a whole.

I think his productivity is actually quite high. The problem is that his realizations are not that high: what he’s able to realize from the market is not as great as the actual craft. And his understanding of where the design market is going, in Paris or New York or Delhi, is not deep enough to adapt his designs. So the constraint, I think, is the aggregation of these workers. The government’s main program, the NRLM, the National Rural Livelihood Mission, and the SRLMs, which are of course very powerful in Maharashtra and Bihar, are extremely critical for aggregating workers at various levels, so that you can then improve quality, deploy technology, create incentives, and create a common expectation of quality.

Because otherwise, as a consumer, I’m not going to be willing to pay unless I’m sure of a certain quality level.

Romal Shetty

So I have a last question for each of you, and what I request is maybe just a minute or two, a quick one. Arundhati, as part of the study, if you remember, we met about 70 personas. We had 70 stories, 70 different aspirations, but together they represent a 490-million-strong workforce, 90% of the country’s workforce. Those are the numbers, but I believe the stories actually matter more. As a reflection, could you share the persona that stuck with you the most during our exercise?

Arundhati Bhattacharya

the mountains, you have the seas, you have culture, you have temples, you have old structures; whatever you ask for, it is there. And yet this is one sector where we really haven’t done well, and it’s very difficult to understand why. People in countries with far, far less are doing much, much better. This is also a very labor-intensive sector. We talk about people not having enough jobs — and why? Because this is a sector that could provide a lot of jobs. There are so many wonders in this country which we ourselves as Indians have not witnessed. This, I think, is something the government needs to take up on a really urgent footing, because not everything is going to happen from the private side.

But of course, the private sector coming in over here in full force, along with the government, should mean a great deal to us, because this sector will give us foreign exchange, and it will give us a great deal of employment. More than anything else, I think it will showcase what India is all about, which is very important. So if you ask me, that was one place where I thought we could do a separate study, just on that segment, to see whether we could do something more for it. And I can tell you she was just as passionate back then.

I remember this discussion specifically as well. And it is a fact that hospitality and tourism are force multipliers, because they impact so many other industries, right?

Romal Shetty

So Manisha, when it comes to employment, an important ally is industry partnership. What special efforts are being made to deepen collaboration between industry and government for societal impact? A quick question.

Manisha Verma

Okay, before I go to industry, I just quickly wanted to respond, because a few years ago I was secretary of the tribal department, and we used to have a small untied fund called the Nucleus Budget Fund, with which we could do some locally contextualized interventions. I remember one of my department officers saying, ma’am, I want to build homestays in tribal areas. Beyond Nashik, in the Bhandardara Falls area, there is a cluster of tribal villages that get these fireflies before the monsoon sets in. It’s a beautiful sight; I would ask some of you to explore it if you haven’t. So at the time I funded a few homestays, just one lakh rupees per village.


That one lakh covered iron furniture, a bed and mattress, and a few such things; they couldn’t even afford that, because they were all small, marginal farmers. There was no such scheme, but I designed it for them because I trusted my officer to use it well — and then I forgot about it. A few years after I had left the department, he said, ma’am, you come, they are doing good business, and they keep reminding you to come to their house and eat. So three years ago I traveled to the Bhandardara area to catch the fireflies. From around 11 at night till 2 in the morning I watched that tract of fireflies, and then I visited the village hamlet.

The lady of the house cooked jowar bhakri and everything, and she was so happy to share it with me. Ma’am, she said, this is the room; a lot of people come and stay in my room, and we give them our food, authentic Maharashtrian food. So that’s one example — it just brought back some warm memories. And I’m sure many such efforts are happening, but as ma’am was saying, we have so much to do in terms of aggregation, a systematic approach to tap the potential of tourism and of the rich culture and diversity that we have. Coming quickly to industry: we have given industry a major role, because we keep talking about industry-aligned courses and matchmaking between job seekers and job providers, and it is our industries that are the job providers, whether small-scale industries and MSMEs, big industry associations, or the service sector.

So what we have done, actually, to modernize the curriculum: one of the ideas we have started is a PPP (public-private partnership) policy, in which, if there is an industry-led anchor partner, we will hand over our ITI management to the industry for 10 or 20 years. We give them freedom to design the curriculum, bring in expert faculty, and even converge our resources. Maharashtra did this first; recently, the Government of India has also announced the PM Setu scheme, which is akin to this concept of developing ITIs in partnership with industry. On a regular basis, too, we are trying to tap industry expertise for OJT (on-the-job training), apprenticeship programs, and advising our academic institutions.

Another good example I would just like to share, because it’s a recent one: we have introduced short-term training courses and opened the ITIs to non-ITI students in the evenings, for optimal utilization. So in the evenings and so on, we can run short-term skilling programs, and we are looking for partnerships. One good partnership we have done is with Mahindra Tractors in Gadchiroli, again for tribal students. We’ve completed the first batch of certification in Mahindra tractor technology, with 100% placement in Gadchiroli. So some

Romal Shetty

Thank you, Manisha.

Manisha Verma

But one line: this is not enough. We really need industry to engage very deeply. There are structural issues, but we are genuinely open to partnerships; industry needs to come forward.

Romal Shetty

Aditya, a final question to you. The Piramal Foundation has developed really deep experience in community-led development, last-mile governance and, of course, behavioral change. In your view, which behavioral-change levers are most critical to unlock adoption of, and trust in, technology among informal workers?

Aditya Natraj

You’re asking a question that we spend all our time on, and I’m going to try to summarize it in two minutes. Let me give you an example with a very basic technology. The Government of India has a huge national digitization program for healthcare workers. There are over a million ASHA workers in India, the last-mile delivery channel for all health services. In many states, an ASHA worker still keeps manual registers: she has 54 different things to track, with a separate register for pregnancies, a separate register for TB, for nutrition, for adolescents. In most states, this had not yet been digitized. Now you would imagine, come on, that’s the easiest thing to automate, right?

Because it’s a tool: she goes to each home, there’s geo-tagging, you have the database, and she fills in the latest problem so that her surveys become more efficient. We went into Bihar to try to digitize this, and Bihar alone has over 100,000 ASHA workers. We thought, hey, this will be done in three months, because we had the technology. The point is that technology adoption is a separate skill from the technology itself. And when we think of technology adoption, again, we’re picturing the white-collar person in this room. When we looked at the people who actually had to adopt this, we saw that they fell into four categories.

Category one is people who are over 50 and have never used any technology; she hasn’t even used a basic feature phone. Now, suddenly, you’re asking her to collect her wage on a smartphone. She says, “Give it to my daughter, she will do it.” So we have to remember that there are people aged 50 to 75 in the government workforce as ASHA workers who don’t even have a feature phone; that’s about a quarter. The second quartile is people who have a feature phone but not a smartphone. They use it only for calls — nothing beyond calls, not even SMS — and for emergencies, not for work. How will they use it for work? When I press here, what happens? Where does the data go? How does it come back? Who’s looking at it? These are the questions in their minds, so there is a huge fear around adopting this technology. Then there’s a third quartile that has smartphones but isn’t used to using them for work: my son watches YouTube, I have Prime Video, that sort of thing, but not using it for my own work. And the top quartile is typically younger ASHA workers, 25 to 35, who have a smartphone, who are out and about, maybe selling something on the side or running a side business; they are really smart.

So adoption depends on the profile of the workers and how far they have already adopted technology. Yet we typically design one-size-fits-all programs, in which one group of people already knew how to do it and another group is never going to do it that way. It is critical to recognize that there is not one India; there are at least four Indias on any dimension. If we first understand that, and then tailor our programs accordingly, I think adoption can happen.

Romal Shetty

Yeah, thank you. You can clearly see the wealth of experience, the depth of knowledge, and the willingness to work — from industry, from the development sector, and from the government. Sometimes we feel a bit disheartened, but whenever we hear stories like these and see leaders like this, we know that India is in good hands. So thank you, everyone, for such a wonderful panel, and thank you for your time.

Related Resources: Knowledge base sources related to the discussion topics (26)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Aditya Natraj highlighted that over 200 million people remain in poverty in India.”

The knowledge base states that more than 200 million people are living in poverty in India, confirming Natraj’s figure [S1].

Confirmed (high)

“Around 80% of the female workforce in India operates within the informal sector.”

S17 reports that approximately 80% of women in India work in the informal sector, confirming the claim.

Additional Context (medium)

“Arundhati Bhattacharya is a recognized leader in responsible AI and public‑private collaboration.”

S4 lists Bhattacharya as a panelist in an AI summit, providing context for her involvement in responsible AI and multi‑stakeholder discussions.

Additional Context (medium)

“Digital platforms can link worker credentials, job opportunities and upskilling certificates, creating an immutable audit trail for payments.”

S83 describes public employment services that link databases on jobseekers and vacancies, illustrating how digital infrastructure can enable comprehensive data trails and improve matching.

Additional Context (medium)

“Upskilling opportunities for informal workers are constrained and under‑developed.”

S84 notes that investment in re‑ and upskilling remains under‑developed, adding nuance to the claim about limited upskilling opportunities.

External Sources (84)
S1
Building Inclusive Societies with AI — -S. Anjani Kumar: Role/title not explicitly mentioned in the transcript, appears to be moderating or introducing the pan…
S2
Building Inclusive Societies with AI — -Manisha Verma: Additional Chief Secretary, SEEID (Skills, Employment, Entrepreneurship, and Innovation Department), Mah…
S3
https://dig.watch/event/india-ai-impact-summit-2026/building-inclusive-societies-with-ai — He’s also the recipient of Time’s Now Amazing Indian Award in Education. Thank you, Aditya, for joining us. On the gover…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S5
S6
https://dig.watch/event/india-ai-impact-summit-2026/building-inclusive-societies-with-ai — She is a strong advocate of responsible AI, inclusive technological adoption, and public -private collaboration for nati…
S7
Building Inclusive Societies with AI — – Aditya Natraj- Manisha Verma – Arundhati Bhattacharya- Aditya Natraj
S8
Building Inclusive Societies with AI — -Romal Shetty: CEO of Deloitte South Asia, moderating the panel discussion This panel discussion, moderated by Romal Sh…
S10
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S11
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Capacity building in digital health was identified as a significant ongoing challenge in the healthcare sector. The need…
S12
Digital Entrepreneurship September 2018 — Ultimately, entrepreneurs must do the hard work of building profitable business models. Yet governments can help by doin…
S13
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Training programs extend to rural areas, indigenous communities, to women and youth through community innovation centers…
S14
Open Forum #5 Bridging digital divide for Inclusive Growth Under the GDC — Minister Michel emphasizes the importance of incorporating digital skills into educational systems and professional trai…
S15
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — In conclusion, Yamanaka’s perspective sheds light on the intersection of startups, development agencies, governments, an…
S16
Open Forum #76 Digital for Development: UN in Action — There is a need for greater accountability from platforms in enforcing their own terms of service and protecting users f…
S17
Addressing the gender divide in the e-commerce marketplace – a policy playbook for the global South (IT for Change) — In India, around 80% of the female workforce operates within the informal sector. These informal workers face numerous c…
S18
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Professor Chandorkar described IISc’s role, operating one of the world’s top academic fabrication facilities and develop…
S19
Why science metters in global AI governance — “But if your potential or probable outcome is the end of jobs, then you need to think about universal basicism.”[113]. “…
S20
Host Country Open Stage — Context-specific solutions are essential rather than one-size-fits-all approaches
S21
WS #162 Overregulation: Balance Policy and Innovation in Technology — Key issues addressed included the role of AI in combating child sexual abuse material (CSAM), the importance of human ri…
S22
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S23
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Twenty years ago there was an assumption that the private sector would solve access issues in remote areas, but now it’s…
S24
Briefing on the Global Digital Compact- GDC (UNCTAD) — In today’s age of digital interdependence, the multi-stakeholder approach is seen as more relevant than ever. Switzerlan…
S25
Building Inclusive Societies with AI — This comment introduced a new conceptual framework that helped explain multiple challenges facing informal workers. It s…
S26
Young voices from Africa – Harnessing digital tools for sustainable trade — Furthermore, the analysis criticizes the government’s hasty approach to formalizing the informal sector through counterp…
S27
Addressing the gender divide in the e-commerce marketplace – a policy playbook for the global South (IT for Change) — In India, around 80% of the female workforce operates within the informal sector. These informal workers face numerous c…
S28
Global Digital Compact topics: How were they tackled in previous policy documents? — Countries are still in early stages of learning how to use digital tools in education and how to prepare students for di…
S29
Launch of the eTrade Readiness Assessment of Ghana (UNCTAD) — In conclusion, access to verifiable data is crucial for e-commerce startups in Ghana to secure finance. Ghana’s position…
S30
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — 4. **Skilling and Lifelong Learning**: – The ILO underscores the necessity of lifelong learning strategies to prepare…
S31
Europe’s rush to innovate — To achieve progress, public-private partnerships are considered essential. The collaboration between the public and priv…
S32
Joint Inspection Unit — The issue of e-learning platforms was extensively addressed for the first time at the UN system-wide level in a report e…
S33
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — The level of disagreement among speakers is relatively low. Most differences stem from varying levels of progress and di…
S34
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — In conclusion, DPI is a critical building block for the digital economy and plays a significant role in achieving the SD…
S35
The digital economy and enviromental sustainability — Building alternative public platforms is suggested to aid in regulation and encourage compliance in the private sector. …
S36
Redrawing the Geography of Jobs / Davos 2025 — Reskilling and upskilling workers is essential to adapt to changing job markets and technological advances
S37
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Example of the ‘Contrate quem luta’ (Hire who fights) platform in Brazil, which connects homeless workers to job opportu…
S38
Shaping the Future: Harnessing E-commerce for Sustainable Development in the ECOWAS Region (ECOWAS) — A government’s role is to provide a conducive policy environment that encourages growth rather than stiflement.
S39
Rewriting Development / Davos 2025 — Boitumelo Mosako: Thank you, and for the opportunity to be on this panel with esteemed panelists. At the Development B…
S40
AI Meets Agriculture Building Food Security and Climate Resilien — And that’s truly right. evolutionarily empowering for farmers. But, you know, to make that work for farmers, there’s a l…
S41
Contents — Other observers felt there was good alignment in the past but that ‘it’s drifting away’ and new alignment is needed now….
S42
Open Forum #9 Digital Technology Empowers Green and Low-carbon Development — The level of consensus among the speakers was relatively high, particularly on the overarching themes of leveraging digi…
S43
Host Country Open Stage — High level of consensus on fundamental principles despite working in different domains. This suggests emerging best prac…
S44
WS #65 Gender Prioritization through Responsible Digital Governance — 4. Community Networks and Locally-driven Solutions Speaker 2: the great panelists who have gone before me. I think a l…
S45
DIGITAL DIVIDENDS — Sources: World Governance Indicators (World Bank, various years) and WDR 2016 team. Data at http://bit.do/WDR2016-Fig5\_…
S46
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Initiatives by UNIDO and potential changes to financial structures reflect the international community’s acknowledgment …
S47
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Economic | Legal and regulatory My response to this one, governments and sovereign nations have their own self-interest…
S48
Business Engagement Session: Sustainable Leadership in the Digital Age – Shaping the Future of Business — Laltaika suggests that members of Parliament and government officials should have attended the session to learn from the…
S49
World Economic Forum® — The perceived inability of governments to respond to major global challenges – from climate change and internet governan…
S50
WS #98 Towards a global, risk-adaptive AI governance framework — Sector-specific and use case-specific governance may be needed rather than one-size-fits-all approaches
S51
Building Inclusive Societies with AI — Aditya Natraj provided crucial perspective on India’s bottom quartile, pointing out that over 200 million people remain …
S52
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — This broadened the scope of discussion beyond traditional tech jobs and influenced later speakers to address rural commu…
S53
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Brazil is currently discussing extensive regulation for digital markets, drawing inspiration from the European Union Dig…
S54
https://dig.watch/event/india-ai-impact-summit-2026/building-inclusive-societies-with-ai — But we are also looking at short term skilling programs through our Maharashtra State Skilling Society. So all the gover…
S55
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — Anupama Shekhar: Dr. Anupama Shekhar, Dr. Dorothea Schmidt-Klau, Ms. Anupama Shekhar, Dr. Dorothea Schmidt-Klau, Dr. Dor…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — References a NEETI report studying blue-collar workers including carpenters, plumbers, hospitality workers, and Anganwad…
S57
Host Country Open Stage — Context-specific solutions are essential rather than one-size-fits-all approaches
S58
Responsible AI in India Leadership Ethics & Global Impact — “One size doesn’t fit all”[111]. “See, it is a very diverse element and there is a different kind of templates which we …
S59
Bridging the Digital Divide: Advancing Inclusion in Africa with Affordable Devices (Carnegie Endowment for International Peace) — In conclusion, Africa faces several challenges in the digital age. Low levels of smartphone adoption, the existence of a…
S60
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Addressing Access Divides Development | Economic Twenty years ago there was an assumption that the private sector woul…
S61
AI and Data Driving India’s Energy Transformation for Climate Solutions — The emphasis on moving from pilots to permanent solutions reflects a broader maturation in the climate-tech space, where…
S62
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S63
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S64
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S65
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S66
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S67
Approaches Towards Meaningful Connectivity in the Global South — This observation exposes a critical gap between policy intention and implementation reality across 27 African countries….
S68
High-Level Track Facilitators Summary and Certificates — These key comments transformed what could have been a routine closing ceremony into a substantive reflection on the fund…
S69
Open Forum #32 Shaping an equal digital future with WSIS+20 & Beijing+30 — The tone of the discussion was largely analytical and solution-oriented. Speakers highlighted both progress made and rem…
S70
WS #31 Cybersecurity in AI: balancing innovation and risks — The tone of the discussion was largely analytical and solution-oriented. Speakers approached the complex issues with a m…
S71
AI for Democracy_ Reimagining Governance in the Age of Intelligence — These key comments fundamentally shaped the discussion by establishing three critical frameworks: (1) the need to move f…
S72
Artificial General Intelligence and the Future of Responsible Governance — The speakers demonstrated strong consensus on the need for holistic approaches to AGI development, emphasizing education…
S73
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S74
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S75
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S76
Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S77
The Global Power Shift India’s Rise in AI & Semiconductors — -Moderator: Role not specified in detail, appears to be the session moderator who introduced the panelists and managed t…
S78
Invest India Fireside Chat — -Moderator: Event moderator introducing the session participants
S79
WS #343 Revamping decision-making in digital governance — Audience: Thank you very much. My name is Anne McCormick. I lead global digital policy for EY. We’re active in the globa…
S80
Defending Our Voice: Global South Participation in Digital Governance — Audience: Thank you. Anne McCormick from EY. Thank you for what’s been shared. It’s extremely insightful and helpful. A …
S81
Closure of the session — Notably, the country praised the discussion paper for its focus on the proposed mechanism’s functionality, feasibility, …
S82
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Aishwarya Salvi:So I think that’s unique to India. Moving next into the room, I would request Mark to give his response….
S83
Contents — regional level (Lee, 2018). In many advanced economies, public employment services have set up, or are setting up, syste…
S84
Contents — Beyond school and university-level education, a range of opportunities are currently available to workers looking to ite…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Arundhati Bhattacharya
5 arguments · 169 words per minute · 1281 words · 452 seconds
Argument 1
Digital marketplace needed for worker discovery, credential sharing, and opportunity access (Arundhati Bhattacharya)
EXPLANATION
Arundhati argues that because India’s large informal workforce lacks awareness of job opportunities, a digital marketplace is essential where workers can list their credentials and experience and connect with available jobs. Such a platform would bridge information gaps and enable workers to find work beyond their immediate locality.
EVIDENCE
She described a plumber who is skilled but unaware of nearby opportunities, emphasizing the need for a marketplace that captures credentials and matches workers with jobs, and noted that digital solutions are the only viable way for a populous nation [34-37].
MAJOR DISCUSSION POINT
Need for a digital marketplace to connect informal workers with jobs
AGREED WITH
Aditya Natraj, Romal Shetty
DISAGREED WITH
Aditya Natraj
Argument 2
Platform provides payment traceability and accountability, reducing delays (Arundhati Bhattacharya)
EXPLANATION
Arundhati points out that delayed and unreliable payments plague both informal workers and MSMEs, and that a digital platform can create a transparent record of transactions, making delays visible and enforceable. Accountability mechanisms embedded in such platforms would improve the business climate.
EVIDENCE
She highlighted pervasive payment delays across sectors, including large corporates and government, and argued that only a digital platform can generate a footprint to track and hold parties accountable [38-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She highlighted chronic payment delays across sectors and argued that digital platforms can create accountability through clear payment footprints, making delays visible and enforceable [S1].
MAJOR DISCUSSION POINT
Digital platform for payment accountability
Argument 3
Upskilling must be delivered through verifiable digital certifications to keep pace with rapid tech change (Arundhati Bhattacharya)
EXPLANATION
Arundhati stresses that continuous technological change demands that informal workers receive upskilling that is validated through digital certifications, ensuring that their new skills are recognized and trusted. This verifiable credentialing supports both workers and employers in a digital ecosystem.
EVIDENCE
She noted that as technology evolves, workers need upskilling and that verifiable digital certification is necessary to confirm completed training [34-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for verifiable digital certification and skill validation is discussed in literature on bridging the digital skills gap, which emphasizes training programs that issue trusted digital credentials [S13].
MAJOR DISCUSSION POINT
Digital certification for upskilling
Argument 4
No cookie‑cutter solution; vertical‑specific interventions backed by government enable ecosystems to grow (Arundhati Bhattacharya)
EXPLANATION
Arundhati argues that solutions must be tailored to the distinct needs of different worker categories, and that government involvement is crucial to create the enabling ecosystem for each vertical. A one‑size‑fits‑all approach would fail to address varied challenges.
EVIDENCE
She said solutions cannot be cookie-cutter and must be vertical-specific, with the government playing a key role in enabling ecosystems [120-124].
MAJOR DISCUSSION POINT
Need for sector‑specific, government‑backed interventions
Argument 5
Reports and suggestions lack an execution authority; a dedicated accountable entity is required to implement recommendations (Arundhati Bhattacharya)
EXPLANATION
Arundhati critiques the current practice of producing reports without assigning responsibility for implementation, calling for an accountable body to drive execution of recommendations. Without such authority, good ideas remain unimplemented.
EVIDENCE
She questioned who is charged with execution after reports, noting the absence of accountability and the need for an authority to implement suggestions [45-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She questioned who is charged with execution after reports and called for an authority that can take charge and be accountable for implementation [S1].
MAJOR DISCUSSION POINT
Need for execution accountability
AGREED WITH
Manisha Verma, Romal Shetty
DISAGREED WITH
Manisha Verma
Manisha Verma
4 arguments · 145 words per minute · 1918 words · 792 seconds
Argument 1
SEED department oversees vocational institutes, accreditation, and a state skills university to build skilled workforce (Manisha Verma)
EXPLANATION
Manisha describes the SEED department’s comprehensive oversight of over a thousand ITIs, a state board for accreditation, and the newly established Ratan Tata State Skills University, all aimed at creating a skilled workforce for industry. This institutional framework supports systematic skill development.
EVIDENCE
She outlined the department’s role overseeing ITIs, the state board’s accreditation function, and the creation of the state skills university [57-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker noted heading the Department of Skills, Employment, Entrepreneurship, and Innovation and the recent creation of the Ratan Tata State Skills University in Maharashtra [S1].
MAJOR DISCUSSION POINT
Institutional framework for vocational skill development
Argument 2
Targeted skilling programmes for jail inmates, people with disabilities, women, and tribal communities ensure social inclusion (Manisha Verma)
EXPLANATION
Manisha highlights that the department partners with agencies to provide vocational training to marginalized groups such as prison inmates, persons with disabilities, women, and tribal populations, ensuring inclusive skill development. These programs aim to integrate vulnerable groups into the formal economy.
EVIDENCE
She listed partnerships for skilling jail inmates, people with disabilities, women, and tribal areas as part of the department’s inclusive agenda [76-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bridging the Digital Skills Gap outlines training extensions to women, people with disabilities, and indigenous communities, providing context for such inclusive programmes [S13]; Open Forum on bridging the digital divide also stresses targeted programs for marginalized groups [S14].
MAJOR DISCUSSION POINT
Inclusive skilling for vulnerable groups
AGREED WITH
Aditya Natraj
Argument 3
Maharashtra’s startup outreach (district‑level committees, hackathons, “Startup Week”) creates jobs and delivers social‑impact innovations (Manisha Verma)
EXPLANATION
Manisha details Maharashtra’s extensive startup ecosystem, noting 35,000 registered startups, district‑level committees, hackathons, grant challenges, and the “Startup Week” competition that channels socially impactful innovations into market opportunities and government contracts. This ecosystem generates employment and addresses societal challenges.
EVIDENCE
She cited the number of startups, district-level outreach, hackathons, grant challenges, and the “Startup Week” process that selects and awards socially impactful startups with work orders and visibility [146-154] and provided examples of awarded startups in health, clean energy, and inclusive mobility [158-166].
MAJOR DISCUSSION POINT
State‑driven startup ecosystem for social impact
Argument 4
PPP policy allows industry‑led management of ITIs, curriculum redesign, and apprenticeship programmes, strengthening industry‑government collaboration (Manisha Verma)
EXPLANATION
Manisha explains a public‑private partnership policy that lets industry anchor partners manage ITIs for long terms, redesign curricula, and provide expert faculty, thereby aligning training with industry needs and fostering apprenticeship opportunities. This model deepens collaboration between government and private sector.
EVIDENCE
She described the PPP policy granting industry-led management of ITIs, freedom to design curriculum, and integration with apprenticeship programmes, referencing the PM Setu scheme as a national counterpart [274-279].
MAJOR DISCUSSION POINT
PPP framework for vocational training
AGREED WITH
Arundhati Bhattacharya, Romal Shetty
Aditya Natraj
4 arguments · 185 words per minute · 1857 words · 600 seconds
Argument 1
Productivity deficit stems from exclusion of the bottom quartile; addressing education, gender and tribal barriers is essential (Aditya Natraj)
EXPLANATION
Aditya emphasizes that the largest productivity gaps arise from the poorest quartile, many of whom lack basic education, face early marriage for women, and belong to tribal communities, all of which limit their economic contribution. Targeted interventions in education, gender equity, and tribal inclusion are needed to raise overall productivity.
EVIDENCE
He described the four quartiles, highlighted that 36% of women marry before 18, low education levels (less than six years), and the concentration of poverty in five eastern states, illustrating the barriers faced by the bottom quartile [86-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He referenced that the bottom quartile is not yet plugged into the market and needs help, echoing statements made in the discussion about the need to support the poorest segment [S1].
MAJOR DISCUSSION POINT
Bottom‑quartile exclusion driving productivity gaps
AGREED WITH
Manisha Verma
Argument 2
Technology should augment informal workers, not replace them; appropriate guardrails are required (Aditya Natraj)
EXPLANATION
Aditya argues that technology interventions must enhance workers’ capabilities rather than displace them, and that safeguards should be put in place to ensure safety, earnings, and job security. The focus should be on augmentation, not substitution.
EVIDENCE
In response to a question about guardrails, he discussed productivity gaps and stressed that technology must improve safety and earnings without replacing workers, linking the issue to the need for appropriate safeguards [79-80] and his broader commentary on productivity and technology [81-108].
MAJOR DISCUSSION POINT
Guardrails for technology‑enabled work
Argument 3
Aggregation models (e.g., FabIndia, Amul, UrbanClap) are critical for improving service quality, market access and incentive alignment (Aditya Natraj)
EXPLANATION
Aditya outlines various aggregation models—FabIndia’s design‑focused supply chain, Amul’s farmer‑owned cooperative, and UrbanClap’s rating system—that can organize blue‑collar workers, improve quality perception, and align incentives, thereby enhancing market access for informal workers.
EVIDENCE
He described the FabIndia model, the Amul/Seva cooperative model, and the UrbanClap rating platform, illustrating how each aggregates workers and distributes benefits [188-209].
MAJOR DISCUSSION POINT
Importance of aggregation for informal workers
AGREED WITH
Arundhati Bhattacharya, Romal Shetty
DISAGREED WITH
Arundhati Bhattacharya
Argument 4
Adoption varies across four categories of workers (age, device literacy); tailored programs are needed to overcome fear and skill gaps (Aditya Natraj)
EXPLANATION
Aditya presents a typology of ASHA workers ranging from those with no phone experience to tech‑savvy younger workers, showing that digital adoption depends on age and device familiarity. Tailored training programs are required to address the specific barriers of each group.
EVIDENCE
He described four categories of ASHA workers: those over 50 with no phone experience, dumb-phone users, smartphone users not accustomed to business use, and younger tech-savvy workers, highlighting the need for differentiated adoption strategies [294-318].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategies for bridging the digital skills gap emphasize differentiated training for varied literacy levels and marginalized groups, providing context for tailored adoption programs [S13]; Open Forum stresses targeted digital programs for different demographic groups [S14].
MAJOR DISCUSSION POINT
Need for differentiated digital adoption strategies
DISAGREED WITH
Arundhati Bhattacharya
Romal Shetty
1 argument · 152 words per minute · 921 words · 361 seconds
Argument 1
Persona‑led approach demonstrates that challenges differ across worker types and requires differentiated solutions (Romal Shetty)
EXPLANATION
Romal notes that the study created distinct personas—cultivators, artisans, textile workers, etc.—each facing unique challenges such as volatility, market access, skill gaps, and income insecurity, underscoring the need for tailored interventions rather than a uniform approach.
EVIDENCE
He listed the personas and their associated challenges: cultivators face volatility, artisans face market-access barriers, textile workers face skill gaps, trade workers face income insecurity, and migration pressures cut across groups [114-119].
MAJOR DISCUSSION POINT
Persona‑based differentiation of interventions
AGREED WITH
Arundhati Bhattacharya
S. Anjani Kumar
1 argument · 139 words per minute · 381 words · 163 seconds
Argument 1
Emphasises that industry, development sector and government must work together as an ecosystem to solve informal sector problems (S. Anjani Kumar)
EXPLANATION
In his opening remarks, Anjani Kumar stresses that solving informal sector challenges requires coordinated action among industry, development agencies, and government, framing the problem as an ecosystem issue.
EVIDENCE
He stated that “all of the ecosystem has to come together to solve for this problem” [4] and introduced a panel representing industry, development, and government [2-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder Partnerships for Thriving AI Ecosystems calls for government frameworks that enable ecosystem collaboration among industry, development agencies, and public sector [S10]; Digital Entrepreneurship notes the role of governments, investors, and larger companies in fostering such ecosystems [S12]; Securing access to financing to digital startups highlights collaborative efforts across stakeholders [S15].
MAJOR DISCUSSION POINT
Multi‑stakeholder ecosystem collaboration
Agreements
Agreement Points
A digital platform/marketplace is essential to connect informal workers with job opportunities, enable payment traceability, and support upskilling and aggregation.
Speakers: Arundhati Bhattacharya, Aditya Natraj, Romal Shetty
Digital marketplace needed for worker discovery, credential sharing, and opportunity access (Arundhati Bhattacharya)
Aggregation models (e.g., FabIndia, Amul, UrbanClap) are critical for improving service quality, market access and incentive alignment (Aditya Natraj)
Persona‑led approach demonstrates that challenges differ across worker types and requires differentiated solutions (Romal Shetty)
All three speakers stress that a digital platform or aggregation model is required to bridge information gaps, provide transparent payment records and deliver verifiable upskilling, recognizing the diversity of informal workers’ needs [34-37][188-209][114-119].
POLICY CONTEXT (KNOWLEDGE BASE)
This view is reflected in UNCTAD’s eTrade Readiness Assessment for Ghana, which stresses verifiable data and digital marketplaces for informal sector finance [S29], and in Brazil’s ‘Contrate quem luta’ platform that links homeless workers to jobs, demonstrating the practical impact of centralized digital marketplaces [S37].
The government must play a central, accountable role in executing recommendations and catalyzing ecosystem development.
Speakers: Arundhati Bhattacharya, Manisha Verma, Romal Shetty
Reports and suggestions lack an execution authority; a dedicated accountable entity is required to implement recommendations (Arundhati Bhattacharya)
SEED department oversees vocational institutes, accreditation and a state skills university to build a skilled workforce (Manisha Verma)
All of the ecosystem has to come together to solve for this problem (Romal Shetty)
Arundhati calls for an execution authority, Manisha describes government structures that can deliver skills and innovation, and Romal frames the challenge as requiring ecosystem collaboration, indicating consensus on strong governmental responsibility [45-50][57-73][4-5].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions in ECOWAS highlight the government’s duty to create a conducive policy environment rather than stifle innovation [S38], and the UN Joint Inspection Unit stresses government leadership and coordination as essential for digital learning platforms [S41].
Interventions must be sector‑specific and tailored to distinct worker personas rather than a one‑size‑fits‑all approach.
Speakers: Arundhati Bhattacharya, Romal Shetty
No cookie‑cutter solution; vertical‑specific interventions backed by government enable ecosystems (Arundhati Bhattacharya)
Persona‑led approach demonstrates that challenges differ across worker types and requires differentiated solutions (Romal Shetty)
Both speakers argue that solutions need to be customized to different worker categories, emphasizing vertical or persona-based design [120-124][114-119].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI governance framework recommends sector-specific, use-case-specific governance instead of one-size-fits-all solutions [S50], and recent analyses underline the need to address structural sector differences for informal workers [S25].
Inclusive skilling for vulnerable groups (women, tribal communities, bottom‑quartile populations) is essential for productivity gains.
Speakers: Manisha Verma, Aditya Natraj
Targeted skilling programmes for jail inmates, people with disabilities, women, and tribal communities ensure social inclusion (Manisha Verma)
Productivity deficit stems from exclusion of the bottom quartile; addressing education, gender and tribal barriers is essential (Aditya Natraj)
Manisha highlights programs for marginalized groups, while Aditya points to the same groups as sources of productivity gaps, showing shared emphasis on inclusive development [76-77][86-108].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s gender-focused e-commerce policy playbook calls for state schemes that target women in the informal sector [S27], while the ILO emphasizes lifelong learning and public-private collaboration to upskill vulnerable workers [S30].
Public‑private partnership and industry collaboration are critical to drive skill development, innovation and social impact.
Speakers: Arundhati Bhattacharya, Manisha Verma, Romal Shetty
She is a strong advocate of responsible AI, inclusive technological adoption, and public‑private collaboration for national growth (Arundhati Bhattacharya)
PPP policy allows industry‑led management of ITIs, curriculum redesign, and apprenticeship programmes, strengthening industry‑government collaboration (Manisha Verma)
Industry partnerships are an important ally for employment and societal impact (Romal Shetty)
All three speakers underscore the importance of public-private collaboration, from policy to practical industry partnerships, to enhance skill ecosystems and social outcomes [8][274-279][237-239].
POLICY CONTEXT (KNOWLEDGE BASE)
European experience shows PPPs are vital for successful digital innovation initiatives [S31], and UNIDO highlights private-sector capacity as a driver of positive change in sustainable development projects [S46].
Similar Viewpoints
Both emphasize that government must create enabling frameworks and partnerships tailored to specific sectors to foster ecosystem growth [120-124][274-279].
Speakers: Arundhati Bhattacharya, Manisha Verma
No cookie‑cutter solution; vertical‑specific interventions backed by government enable ecosystems (Arundhati Bhattacharya)
PPP policy allows industry‑led management of ITIs, curriculum redesign, and apprenticeship programmes, strengthening industry‑government collaboration (Manisha Verma)
Both stress that reaching the most marginalized groups through tailored skilling is key to improving overall productivity and inclusion [86-108][76-77].
Speakers: Aditya Natraj, Manisha Verma
Productivity deficit stems from exclusion of the bottom quartile; addressing education, gender and tribal barriers is essential (Aditya Natraj)
Targeted skilling programmes for jail inmates, people with disabilities, women, and tribal communities ensure social inclusion (Manisha Verma)
Both agree that a differentiated, persona‑based approach is necessary rather than a uniform solution [120-124][114-119].
Speakers: Arundhati Bhattacharya, Romal Shetty
No cookie‑cutter solution; vertical‑specific interventions backed by government enable ecosystems (Arundhati Bhattacharya)
Persona‑led approach demonstrates that challenges differ across worker types and requires differentiated solutions (Romal Shetty)
Unexpected Consensus
Recognition that low‑tech, community‑driven interventions can be as impactful as high‑tech digital solutions.
Speakers: Arundhati Bhattacharya, Manisha Verma
Simple equipment upgrades (replacing stone‑age bamboo tools) dramatically improved product quality without fancy technology (Arundhati Bhattacharya)
Funding modest homestays in tribal areas created tourism and cultural value (Manisha Verma)
While both speakers champion digital platforms, they also converge on the importance of simple, low-tech actions, such as equipment improvement and modest community funding, as effective ways to boost livelihoods, an unexpected alignment between a corporate leader and a government official [128-135][240-270].
POLICY CONTEXT (KNOWLEDGE BASE)
The WS #44 discussion on gender prioritization underscores the effectiveness of community networks and locally-driven low-tech solutions [S44], and policy briefs on digital public infrastructure note that low-tech alternatives can complement high-tech platforms [S35].
Overall Assessment

There is strong consensus that addressing informal sector challenges requires a coordinated ecosystem where government provides accountable execution and enabling policies, digital platforms and aggregation models connect workers, interventions are tailored to specific worker categories, and vulnerable groups receive targeted inclusion. Public‑private partnerships and even low‑tech community solutions are recognized as complementary pathways.

High consensus across speakers, indicating a shared understanding that multi‑stakeholder, differentiated, and accountable approaches—combining digital and simple interventions—are essential for advancing informal sector development.

Differences
Different Viewpoints
Mechanism for connecting informal workers – centralized digital marketplace vs aggregation‑based models
Speakers: Arundhati Bhattacharya, Aditya Natraj
Digital marketplace needed for worker discovery, credential sharing, and opportunity access (Arundhati Bhattacharya)
Aggregation models (e.g., FabIndia, Amul, UrbanClap) are critical for improving service quality, market access and incentive alignment (Aditya Natraj)
Arundhati argues that a single digital platform is essential to list credentials, match workers with jobs and ensure payment traceability [34-37][45-50]. Aditya counters that before a digital platform can work, workers must be aggregated through cooperative or rating-based models to improve quality perception and market incentives, citing FabIndia, Amul and UrbanClap examples [188-209].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on digital public infrastructure note differing models for worker aggregation, with centralized marketplaces exemplified by Ghana’s e-trade data platforms [S29] and aggregation approaches highlighted in Brazil’s ‘Contrate quem luta’ case study [S37].
Who should lead implementation of recommendations – a dedicated government authority vs a catalyst role for government with industry‑led execution
Speakers: Arundhati Bhattacharya, Manisha Verma
Reports and suggestions lack an execution authority; a dedicated accountable entity is required to implement recommendations (Arundhati Bhattacharya)
PPP policy allows industry‑led management of ITIs and the government should act as a catalyst rather than the primary executor (Manisha Verma)
Arundhati stresses the need for an accountable body to drive execution of the study’s recommendations, warning that without it reports remain unused [45-50]. Manisha emphasizes that the government’s role is to facilitate and catalyze, with industry taking the lead through public-private partnerships for curriculum redesign and apprenticeship programmes [143-148][274-279].
POLICY CONTEXT (KNOWLEDGE BASE)
UN discussions stress the need for government coordination but also recognize industry as a catalyst, reflecting the tension between dedicated public agencies and private-led execution [S41][S48].
Approach to digital adoption – one‑size‑fits‑all platform versus differentiated programmes based on user age and device literacy
Speakers: Arundhati Bhattacharya, Aditya Natraj
Digital marketplace needed for worker discovery, credential sharing, and opportunity access (Arundhati Bhattacharya)
Adoption varies across four categories of workers (age, device literacy); tailored programs are needed to overcome fear and skill gaps (Aditya Natraj)
Arundhati proposes a universal digital solution to address discovery, upskilling and payments without detailing user segmentation [34-38]. Aditya presents a typology of ASHA workers ranging from no phone experience to tech-savvy youth, arguing that programmes must be customized for each group to achieve adoption [294-318].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI governance framework advocates use-case-specific approaches rather than universal platforms [S50], and the Digital Adoption Index highlights disparities in government digital service readiness across populations [S45].
Unexpected Differences
Extent of government leadership versus private/industry leadership in driving change
Speakers: Arundhati Bhattacharya, Manisha Verma
Reports and suggestions lack an execution authority; a dedicated accountable entity is required to implement recommendations (Arundhati Bhattacharya)
PPP policy allows industry‑led management of ITIs and the government should act as a catalyst rather than the primary executor (Manisha Verma)
Both speakers emphasize the importance of government involvement, yet Arundhati calls for a strong, accountable government authority to implement reforms, whereas Manisha argues that the government should step back and let industry take the lead through PPPs. This contrast was not anticipated given the shared emphasis on government’s role earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Commentary on the perceived inability of governments to meet global challenges underscores calls for stronger private-sector leadership and multilateral coordination [S49][S47].
Digital platform as a pan‑solution versus need for simple physical equipment upgrades
Speakers: Arundhati Bhattacharya, Aditya Natraj
Digital marketplace needed for worker discovery, credential sharing, and opportunity access (Arundhati Bhattacharya)
Aggregation models are critical for improving service quality and market access (Aditya Natraj)
Arundhati highlights a digital marketplace as the primary lever, while earlier she also shared a story where a simple upgrade of bamboo-working equipment (non-digital) dramatically improved product quality and marketability [126-135]. Aditya’s focus on aggregation models similarly points to non-digital organisational structures as prerequisites for any digital solution, an angle not explicitly addressed by Arundhati’s platform-centric view.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of digital public infrastructure caution against viewing platforms as pan-solutions, emphasizing complementary physical upgrades and low-tech interventions [S35][S44].
Overall Assessment

The panel largely concurs on the need for multi‑stakeholder collaboration and upskilling of informal workers. However, substantive disagreements emerge around the preferred mechanism for connecting workers (centralised digital marketplace vs aggregation models), the locus of execution authority (government‑led versus industry‑led PPPs), and the design of digital adoption programmes (uniform platform versus differentiated, user‑specific interventions).

Moderate – while there is consensus on goals, the divergent views on implementation pathways could hinder coordinated action unless a hybrid approach is adopted that integrates aggregation, tailored adoption strategies, and a clear accountable body bridging government and industry.

Partial Agreements
The moderator opens by stating that industry, development and government must work together [4]; Romal reinforces the multi‑sector panel [23-24]; Arundhati later notes that the government must enable ecosystems for vertical‑specific solutions [120-124]; Manisha describes departmental coordination of ITIs, accreditation and PPPs to build a skilled workforce [57-73][274-279]; Aditya highlights collaboration with government for digitisation of health and education systems [188-190]. All agree on the necessity of collaboration and on the importance of upskilling, though they differ on the exact mechanisms.
Speakers: S. Anjani Kumar, Romal Shetty, Arundhati Bhattacharya, Manisha Verma, Aditya Natraj
Multi‑stakeholder ecosystem collaboration is essential to solve informal sector challenges
All participants acknowledge the need for upskilling and skill development for informal workers
Takeaways
Key takeaways
Digital platforms are essential to address discovery, credential sharing, market access, and payment traceability for informal workers.
Government agencies (e.g., Maharashtra’s SEED department) must lead skill development, accreditation, and inclusive programs for vulnerable groups.
Productivity gaps stem largely from the exclusion of the bottom quartile; addressing education, gender, and tribal barriers is critical.
Technology should augment informal workers, not replace them; upskilling must be supported by verifiable digital certifications.
One‑size‑fits‑all solutions are ineffective; sector‑specific interventions and aggregation models (FabIndia, Amul, UrbanClap) are needed.
Public‑private partnerships and the startup ecosystem can drive social‑impact innovations and create job pathways.
Adoption of digital/AI tools varies across age and device‑literacy groups; tailored behavioral‑change programs are required.
Current reports lack an execution authority; a dedicated accountable entity is needed to implement recommendations.
Multi‑stakeholder collaboration (industry, development sector, government) is crucial for systemic change in the informal sector.
Resolutions and action items
Proposal to establish a dedicated execution authority or lead agency to implement the digital platform recommendations.
Maharashtra’s SEED department to continue expanding accreditation, the state skills university, and short‑term skilling programs for marginalized groups.
Implementation of PPP policy allowing industry‑led management of ITIs and curriculum redesign, aligned with the national PM‑SETU scheme.
Continuation and scaling of Maharashtra’s ‘Startup Week’, hackathons, and district‑level startup committees to foster socially‑impactful ventures.
Leverage existing aggregation programs (NRLM, SRLM) to improve quality assurance and market access for blue‑collar workers.
Design and deploy tiered digital‑adoption training modules that address the four identified worker categories (age/device literacy).
Unresolved issues
Specific entity or mechanism that will be held accountable for executing the digital marketplace and payment‑traceability platform.
Funding model and governance structure for the proposed nationwide digital platform.
Detailed guardrails to ensure AI/technology augments rather than displaces informal workers.
Scalable approach for aggregating diverse informal occupations beyond pilot models (e.g., FabIndia, UrbanClap).
How to systematically engage and secure deeper industry participation in PPPs and apprenticeship schemes.
Metrics and timelines for measuring impact of skilling programs on the bottom quartile’s productivity.
Suggested compromises
Combine a centralized digital platform with vertical‑specific modules to respect the differing needs of each worker persona.
Balance government‑led standardisation (accreditation, certification) with private‑sector innovation (startup solutions, equipment upgrades).
Adopt simple, low‑technology interventions (e.g., upgraded tools for tribal bamboo workers) alongside high‑tech digital solutions.
Encourage industry to act as catalyst rather than sole driver, allowing the government to provide policy support and funding while industry supplies expertise and resources.
Thought Provoking Comments
India is great at putting out fantastic reports, but who is charged with the execution? There has to be an authority that will take charge, run with it, and be accountable for actually implementing it.
Highlights the chronic implementation gap between policy recommendations and real‑world action, calling for a concrete accountability mechanism rather than just more reports.
Shifted the conversation from describing problems to questioning systemic responsibility. It prompted the moderator to acknowledge the need for execution and set the stage for later discussion on concrete platforms and governance structures.
Speaker: Arundhati Bhattacharya
The bottom quartile of India’s population is not even plugged into the market; many lack basic education, and issues like early marriage for women drastically limit productivity. We need to focus on those 70 million people in five eastern states, not just the top quartile.
Broadens the analysis from generic informal‑sector challenges to deep structural inequities—education, gender, and regional disparities—that drive the productivity gap.
Introduced a new layer of complexity, moving the dialogue from technology solutions to social determinants. It caused other panelists to acknowledge the need for targeted interventions for the most marginalized groups.
Speaker: Aditya Natraj
In a tribal bamboo‑working village we simply replaced stone‑age tools with slightly better equipment. The product quality jumped and market demand increased—no fancy AI needed, just a modest, context‑specific innovation.
Demonstrates that low‑tech, locally‑tailored solutions can have outsized impact, challenging the assumption that high‑tech AI is always required.
Reoriented the discussion toward pragmatic, low‑cost interventions. It reinforced the earlier point about the need for on‑the‑ground knowledge before scaling digital platforms.
Speaker: Arundhati Bhattacharya
Aggregation is critical for blue‑collar workers. Models like FabIndia, Amul, and UrbanClap show different ways to organize workers, ensure quality, and share benefits, but we still lack a systematic aggregation for many artisans.
Provides concrete examples of successful aggregation models and underscores that without such structures, quality assurance and market access remain elusive for informal workers.
Expanded the conversation to include supply‑chain organization and cooperative models, prompting the government representative to reference NRLM and SRLM as aggregation mechanisms.
Speaker: Aditya Natraj
Technology adoption depends on four distinct user groups: (1) workers over 50 with no phone experience, (2) dumb‑phone users, (3) smartphone owners who use it only for entertainment, and (4) young, tech‑savvy workers. One‑size‑fits‑all programs will miss three quarters of the audience.
Identifies the heterogeneity within the informal workforce and the behavioral barriers to digital adoption, urging nuanced, segmented program design.
Prompted a deeper analysis of implementation strategies, influencing the panel to consider differentiated training and support mechanisms rather than uniform digital roll‑outs.
Speaker: Aditya Natraj
I funded a one‑lakh‑rupee grant for homestays in a tribal firefly‑rich area, leading to a thriving community‑based tourism model that now attracts visitors from across the state.
Illustrates how small, flexible funding and personal initiative can catalyze sustainable livelihood projects, providing a tangible success story that bridges policy and grassroots impact.
Served as a concrete example of the earlier discussion on tourism’s potential, reinforcing the argument for micro‑funds and localized interventions, and inspiring other panelists to think about scalable pilots.
Speaker: Manisha Verma
Balancing a centralized platform with the need for persona‑specific solutions means we cannot have a cookie‑cutter approach; fundamental issues like access, health, and literacy must be addressed early, while vertical‑specific interventions require government enablement and stakeholder collaboration.
Synthesizes the tension between universal infrastructure and tailored interventions, emphasizing the role of government as an ecosystem enabler.
Re‑focused the dialogue on the architecture of solutions, leading to a consensus that both a common digital backbone and sector‑specific modules are necessary.
Speaker: Arundhati Bhattacharya
Overall Assessment

The discussion was steered by a handful of incisive remarks that moved it beyond a superficial listing of challenges. Arundhati’s call for execution accountability and her low‑tech bamboo example questioned the prevailing tech‑first mindset, while Aditya’s focus on the bottom quartile, aggregation models, and behavioral segmentation exposed deep structural and cultural barriers. Manisha’s micro‑funding anecdote provided a vivid proof‑of‑concept that small, context‑aware interventions can succeed. Together, these comments shifted the tone from problem‑identification to concrete governance, design, and implementation considerations, shaping a more nuanced, action‑oriented conversation.

Follow-up Questions
Who should be the accountable authority responsible for executing the recommendations and platform implementation for informal workers?
Arundhati highlighted the gap between report recommendations and actual execution, emphasizing the need for a designated body to ensure accountability and implementation.
Speaker: Arundhati Bhattacharya
Should a dedicated study be conducted on the tourism and hospitality sector to unlock its potential for informal employment and foreign exchange earnings?
She noted that despite India’s rich cultural assets, the sector underperforms and suggested a separate research effort to identify interventions.
Speaker: Arundhati Bhattacharya
Why is the bottom quartile of the population not integrated into the formal market, and what specific interventions are required to bring them into productive employment?
Aditya pointed out that a large segment remains excluded due to low education and social factors, calling for deeper investigation into barriers and targeted solutions.
Speaker: Aditya Natraj
What aggregation models (e.g., FabIndia‑type, cooperative like Amul, platform‑based like UrbanClap) are most effective for organizing blue‑collar informal workers and improving quality, market access, and earnings?
He discussed various aggregation approaches and indicated the need to research their applicability and impact on informal labor markets.
Speaker: Aditya Natraj
How can digital and AI interventions be tailored to the four distinct user groups (non‑phone users, dumb‑phone users, smartphone but non‑business users, and tech‑savvy young workers) to improve adoption among informal workers?
Aditya identified heterogeneous technology adoption levels and suggested further study to design differentiated programs.
Speaker: Aditya Natraj
What is the impact of public‑private partnership (PPP) policies that give industry long‑term control over ITI management, curriculum design, and faculty, on skill relevance and job placement outcomes?
Manisha described a PPP model for ITIs and implied the need to evaluate its effectiveness for aligning training with industry needs.
Speaker: Manisha Verma
How effective is the ‘Startup Week’ initiative and direct work‑order awards in scaling socially impactful startups, and can this model be expanded or refined?
She highlighted the program’s success in linking startups to government contracts, suggesting further research on scalability and long‑term impact.
Speaker: Manisha Verma
Can the small, untied Nucleus Budget Fund be leveraged systematically to develop tribal‑area homestays and other tourism‑related micro‑enterprises, and what are the outcomes of such pilots?
Manisha recounted a personal pilot funding homestays, indicating a need to study its replicability and economic benefits for tribal communities.
Speaker: Manisha Verma
What specific guardrails are needed to ensure that technology augments informal workers’ safety and earnings without displacing them, and how should these be monitored?
Romal asked about safeguards for technology deployment, pointing to a gap in policy guidance that requires further clarification.
Speaker: Romal Shetty (addressed to Aditya Natraj)
How can a centralized digital platform balance uniform services (e.g., marketplace, certification, payment tracking) with the diverse, persona‑specific challenges of different informal worker groups?
He raised the tension between a one‑size‑fits‑all platform and the need for customized solutions, indicating a research area on platform design.
Speaker: Romal Shetty (addressed to Arundhati Bhattacharya)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Aligning AI Governance Across the Tech Stack ITI C-Suite Panel

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by noting the challenge of managing AI risk while supporting global innovation and the need for governments to align their approaches [1-2]. Panelists agreed that fragmented national policies risk stifling cross-border AI services, so alignment is essential [12][15].


Jay Chaudhry warned that if every country imposed its own AI rules, multinational firms would face operational friction, yet excessive alignment could also hinder innovation [22-24]. He argued that too much compliance kills innovation and that a balanced, flexible approach is preferable [27-28].


Aparna Bawa emphasized that cross-border data flows underpin services like Zoom and AI, and restricting them would impede citizens’ progress [47-50]. She described the trade-off between protecting privacy/security and maintaining free data movement, calling for a basic, commonly understood framework [52-56]. Bawa also highlighted a partnership model where enterprises provide safeguards while users adopt responsible AI practices [106-108][121-128].


David Zapolsky echoed the importance of unrestricted flow of goods, information, and services for Amazon’s global operations, from e-commerce to satellite internet [58-61]. He cautioned that premature, blanket regulation creates uncertainty and costs, citing Colorado’s early AI law as an example of unclear implementation [64-68]. Zapolsky suggested focusing on high-risk uses, such as decisions affecting health or civil rights, and building common principles rather than a universal theory of AI regulation [66-68].


Jarek Kutylowski argued that DeepL’s global mission requires a transparent, harmonized governance layer that respects sovereignty yet enables consistent AI services worldwide [75-82]. He noted that as DeepL moves into agentic AI, the stakes rise and the company must embed trust and flexible controls to meet varied regulatory expectations [167-174][176-182].


All panelists concluded that developing international standards and inclusive, up-skilling initiatives will be key to unlocking AI’s benefits without over-regulation [364-368][390-393].


Keypoints


Major discussion points


Global alignment of AI governance is essential to avoid fragmentation and sustain innovation.


The moderator frames the need for coordinated policy across borders [1-2] and notes that AI “doesn’t stop at borders” [15-20]. Panelists echo this: Jay points out the chaos of 50-country rule-sets [22-28]; Aparna stresses that cross-border data flows are the lifeblood of services like Zoom and warns that heavy restrictions “impede their own citizens’ progress” [39-52]; David argues for “common principles” and a “high-risk” baseline rather than a “unified field theory” of regulation [58-68]; Jarek adds that a “common layer…with a right balance of protecting sovereignty” would benefit global users [75-82].


Finding the right balance between risk-based regulation and innovation is a recurring tension.


Jason warns that “acting too much…can stifle innovation” [30-34], while Jay observes that “when we start doing too much governance…we start killing innovations” [27-28]. David describes the danger of premature, blanket rules (e.g., Colorado’s early AI law) that create “costs…uncertainty and you inhibit innovation” [64-68]. Later, Jay stresses the need for “flexible policy that evolves” and cautions that “compliance doesn’t mean security” and that over-regulation can render controls obsolete [180-202]. David reinforces the point by differentiating risk profiles (shopping assistant vs. medical documentation) and urging regulators to “not…inhibit adoption of really useful ways” [281-286][288-295].


Security and trust are non-negotiable foundations, especially as AI agents become more powerful.


The moderator explicitly asks about the “trust and security conversation” [84-86]. Jay explains that AI can be “abused” through data-poisoning and other attacks, and argues for a security overlay across all five AI layers [87-95]. In the forward-looking segment he warns that “AI agents will be the weakest link” and could be hijacked, underscoring the need for identity, authorization, and robust zero-trust controls [340-358].


Enterprises and end-users share responsibility; product design must embed choice, education, and safeguards.


Aparna describes the partnership model: enterprises must provide “sufficient controls for the individual user” while users need basic AI hygiene (e.g., not feeding personal data into prompts) [102-130]. She later expands on how Zoom offers tiered controls, from enterprise admin toggles to consumer-level safety features, so that risk-based decisions can be made at the user level across diverse contexts [226-272].


Upstream governance decisions (e.g., Amazon’s cloud services) shape downstream customer capabilities and must be built with security, data-ownership, and flexibility in mind.


David outlines Amazon’s “upstream” approach: a Bedrock platform that supplies over 100 models, keeps customer data private, embeds guardrails, and provides disclosures so enterprises can control outputs [137-160]. He also notes that any government-imposed barrier creates “friction” for Amazon’s globally interoperable services [58-63].


Overall purpose / goal of the discussion


The panel was convened to explore how governments worldwide can cooperate with industry to create a coherent, risk-aware AI governance framework that protects citizens, preserves security, and yet does not choke the rapid innovation needed for AI-driven global interoperability.


Tone of the conversation


The tone begins formally and forward-looking, emphasizing the strategic importance of alignment. As individual speakers share concrete experiences, it becomes more pragmatic and collaborative, highlighting real-world trade-offs and shared responsibilities. Toward the end, the mood turns hopeful and aspirational, focusing on inclusive growth, emerging international standards, and a vision of cross-border cooperation for the next summit. Throughout, the discussion remains constructive, balancing caution about over-regulation with optimism about coordinated standards.


Speakers

Jason Oxman – Moderator/Host; President & CEO, Information Technology Industry Council (ITI) [S7][S8]


Jay Chaudhry – CEO, Chairman, and Founder of Zscaler; security expert [S9][S11]


Aparna Bawa – Chief Operating Officer (COO) of Zoom [S4]


David Zapolsky – Chief Global Affairs and Legal Officer at Amazon [S2]


Jarek Kutylowski – CEO of DeepL (also referred to as Dr. Jarek Kutylowski) [S6][S5]


Additional speakers:


– None


Full session report: Comprehensive analysis and detailed insights

The panel opened with Jason Oxman framing the dual challenge for the AI industry: managing risk while fostering global innovation and interoperability, and urging governments to move beyond fragmented, nation-centric rules toward coordinated AI governance that can support systems at scale [1-2].


After a brief roll-call of the participants – Jay Chaudhry (CEO, Zscaler), Aparna Bawa (COO, Zoom), David Zapolsky (Chief Global Affairs & Legal Officer, Amazon) and Dr Jarek Kutylowski (CEO, DeepL) – the moderator asked each panelist why cross-jurisdictional alignment matters.


Jay Chaudhry warned that a multinational corporation operating in dozens of countries would be crippled if each market imposed its own AI regime, creating “a lot of issues” and “killing innovations” when governance becomes excessive [6-9]. He also referenced India’s five-layer AI model, noting that without a security overlay across those layers the model could be abused, underscoring the need to embed sovereignty-respecting security controls from the outset [10-12].


Aparna Bawa emphasized the importance of cross-border data flows for Zoom’s global connectivity and argued that any restriction “impedes their own citizens’ progress” by throttling the infrastructure that underpins AI services [13-16]. She added that the COVID-19 pandemic forced Zoom to shift from an enterprise-only platform to a consumer-facing service, prompting the rapid deployment of default security controls such as waiting rooms and passcodes to balance swift innovation with user safety [17-20].


David Zapolsky described how Amazon’s e-commerce, cloud, and satellite-Internet businesses rely on the free flow of goods, information, and open skies, and warned that government-imposed barriers generate friction and uncertainty. He illustrated this with Colorado’s early AI law, showing how premature, blanket regulation creates costs and stalls adoption because “no one really knows how to apply it” [21-24][28-31]. He also noted Amazon’s internal “launch-everywhere” mantra – the desire to roll out new AI features globally at once – which sometimes must be delayed due to regulatory uncertainty [25-27].


Jarek Kutylowski explained DeepL’s mission to enable multilingual communication through a “transparent, common layer” of governance that balances national sovereignty with shared norms, allowing the company to serve a truly global market while maintaining trust in AI outputs [32-35]. He added that growing up under Europe’s early AI regulation gave DeepL an “edge” in learning to work with regulatory requirements, informing its strategy for expansion into other markets [36-38].


The discussion then turned to security and trust as non-negotiable foundations. Jay Chaudhry argued that AI is “powerful but dangerous” and advocated a security overlay across all five layers of the AI stack to guard against data poisoning, rogue agents, and AI-enabled threats such as ransomware and nation-state misuse [39-45][46-48][49-52]. Aparna Bawa highlighted Zoom’s partnership model, noting that privacy, security, and user choice are embedded in the product, from enterprise-level toggles to consumer safeguards, and that users must also practice “basic AI hygiene,” for example by avoiding the inclusion of personal data in prompts [53-57].


David Zapolsky then described Amazon’s Bedrock platform, which offers over one hundred models while guaranteeing that “the data they use…stays their data.” The service includes built-in guardrails, content filtering, and transparent disclosures, shifting much of the downstream governance burden to customers in a secure, scalable environment [58-62].


Jarek Kutylowski discussed DeepL’s move into agentic AI, noting that the stakes have risen from simple email translation to high-impact tasks such as translating R&D documentation for drug approvals. He argued that trust in AI outcomes must be reinforced by transparent, adaptable governance and that providing customers with tools to manage risk themselves is a hallmark of a mature AI provider [63-68].


When the moderator asked how a flexible, risk-based approach might preserve both safety and progress, the panel converged on several points. The panelists agreed that over-regulation “kills innovation” and that governance must be evidence-based and use-case specific. David Zapolsky defined “high-risk” uses explicitly as “decisions that affect life, health or civil rights” and advocated a principle-based approach that first identifies such uses before tailoring safeguards [69-71][72-74]. Aparna Bawa echoed the need for a “basic level framework” that respects national sovereignty while providing clear, evidence-driven guidelines for developers [75-77].


Looking ahead one year, the panelists shared a common vision of inclusive, standards-driven AI. Jay Chaudhry called for up-skilling programs and configurable security controls that enable enterprises of all sizes to adopt AI safely [78-80]. Aparna Bawa stressed the importance of low-bandwidth access so that even a farmer in a Karnataka village can benefit from AI, linking inclusivity to market creation [81-84]. David Zapolsky highlighted the emerging international consensus around standards such as ISO 42001, which would provide “a common set of principles and a common set of technical standards” for global AI governance [85-88]. Jarek Kutylowski concluded that a global framework would facilitate seamless multilingual collaboration, the core of DeepL’s mission [89-91].


Collectively, the panel called for a globally aligned, risk-based AI governance framework that protects security, respects sovereignty, and enables inclusive innovation. [92-93]


Session transcript: Complete transcript of the session
Jason Oxman

The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability. So today’s discussion, we’re very fortunate to have leaders from across the AI stack, if you will, who are here with us to discuss how governments can work in partnership with industry to align responsibilities, to reduce fragmentation, and to build trust in AI systems that are built for scale. We are very pleased to have with us some luminaries from across the tech ecosystem. Jay Chaudhry is the CEO of Zscaler. Aparna will be joining us in just a moment. David Zapolsky. I almost missed that. David Zapolsky, who made it, is the Chief Global Affairs and Legal Officer at Amazon.

And Dr. Jarek Kutylowski. How did I do there? Thank you, is the CEO of DeepL. So to set up the conversation, I wanted to ask each of our panelists to help us think through the AI governance conversation that’s taking place globally. So as we’ve seen here at the AI Impact Summit, there are efforts among global governments to align their approach, even though they may take different directions. Hi, Aparna. And as Aparna is now joining us, I will introduce Aparna Bawa, who is the chief operating officer of Zoom, which is not only a technology company, it is also a verb. And so thank you, Aparna, for being here with us today. So as we were getting ready to talk about AI governance conversations, it is absolutely the case that there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders.

It wants to cross borders and unite people around the world. So I wanted to ask each of our esteemed panelists, and, Jay, I’ll start with you. for perhaps your philosophical perspective on how AI alignment can take place across governments. Why is it that that alignment matters? And perhaps even share your perspective on what happens if that AI alignment breaks down and governments are going off in different directions and taking different approaches. Where do you see the biggest challenges around this idea of alignment of AI governance around the world? Jay, thank you.

Jay Chaudhry

Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If each country has its own governance rules for using AI, and you’re using some systems locally, some systems globally, it’ll create a lot of issues. Some level of alignment is good, but over-alignment doesn’t help either. In fact, I have similar thoughts on governance too. Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovations. So that’s personally my view.

Jason Oxman

No, it’s an important viewpoint, because there is this idea that governments need to act. They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation. So, Aparna, I want to go to you with the same question. As we’re having this global AI governance conversation here at the AI Impact Summit, governments are going in different directions in many cases. This is the first time the conversation has taken place in the Global South, so I think that’s a good thing for aligning governance approaches. So from where you sit, why is alignment across the AI governance ecosystem internationally so important, and what can happen when it doesn’t, and things go wrong?

Aparna Bawa

I will say, just to start, as an Indian American and someone who has lived in India (and we talked about this this morning at a breakfast we were at), it is quite striking to me, some of the haves and have-nots. Even as we were talking about this morning, for example, during COVID, some countries were fighting for PPE and fighting for oxygen tanks, and, you know, we in California were stockpiling toilet paper. I mean, the contrast is so stark. And I remember during COVID thinking to myself, that doesn’t seem right. And so I do feel like countries should protect the rights of their citizens and should want to advance their economies. But it is a tradeoff.

And I think it’s very well put to say it’s a tradeoff. So, for example, Zoom: imagine you would not be able to connect with people globally if we did not have cross-border data flow. So when we’re talking about AI, you can talk about AI, but it’s no different at the data layer. We would not exist if we didn’t have cross-border data flows and free, unencumbered data flow. And when governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress. And so at some point, it becomes a tradeoff. Now, obviously, the requirements around privacy and security are table stakes. If you get on a Zoom meeting with someone, you want to know that the person on the other side is that person.

That is sort of table stakes. But I’m with Jay on this one. I think there’s a basic level framework that is necessary. To be honest, in the United States we live today with multiple state privacy frameworks. Is it great? No. Is it inefficient? Yes. There’s something in between, where you have a framework that is commonly understood, with a common set of norms and values. I also respect a nation’s right of sovereignty. So there has to be a balance that makes sense.

Jason Oxman

David, Amazon operates pretty much in every country on the planet, although I’m sure you can name a few that you’re not in yet. There’s a few, yeah, there’s a few, a small number. Can you share your view on how this AI governance conversation needs to have some, perhaps some unity to it?

David Zapolsky

Sure. And first of all, I’m going to try not to repeat Aparna’s view, because I basically agree with everything you just said. If you think about every one of Amazon’s business models: our stores, the way we’re able to export 20 billion worth of goods from Indian small and medium-sized businesses to overseas markets, and we’re looking to take that to 80. If you look at the cloud, if you look at our entertainment business, if you look at the satellites that we’re launching for a global Internet service, every one of them depends on free flow of goods, free flow of information, open skies.

That’s just kind of the way we’ve designed the company, to be global and to have interoperable services. And so every time a government erects barriers to that, it creates friction. It creates potential problems. And I think the global trend towards more of that is concerning. With AI particularly, I think the danger of some of the regulation that we’ve seen around the world is that we all still don’t really know how it’s going to be used, where it’s going to be most effective, where it’s going to be dangerous. There’s a lot of theories about it. There’s a lot of fear, uncertainty, and doubt about that, a lot of science fiction. And I think the danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs.

You create uncertainty, and you inhibit innovation, you inhibit adoption. And that’s kind of what we’re seeing a couple years into this large language model journey. There are parts of the world that were quick to regulate, and civil society was all over that: we’re going to regulate all these things, we’re going to come up with these theoretical constructs of high risk, low risk, and we don’t really know what that means in practice yet. And so what’s happening? Well, look at Colorado. Colorado was one of the first states out of the box with comprehensive AI regulation, which, by the way, isn’t bad in principle, but they don’t know how to apply it. No one really knows how to apply it, and I think you’re seeing some buyer’s regret. They put the implementation on hold; they want to figure out standards. I won’t even talk about the EU, but they’re pretty much in the same boat. They’re all looking for ways to not have to put the thing into practice, because they don’t really know how it’s going to play out. So I think what we need to do is step back and look for some common principles. What is a high-risk use?

What can we all agree are high risk? Well, if you’re using a technology to make decisions that are going to affect the life, health, or civil rights of an individual, let’s talk about that. Are there laws that protect that already? Do we need to supplement them? Let’s work backwards from the harms we can see today and regulate there, versus trying to come up with the unified field theory of AI regulation, because that’s only going to slow us down.

Jason Oxman

Great. Jarek, we’ve been talking about unifying global governance approaches, making sure, one might say, that they all speak a common language. That’s what DeepL does. See what I did there? Your language AI platform is all about making sure everyone can communicate with each other regardless of the language they speak. From your perspective, you’re our European-headquartered representative here, but you do business around the world. What can you share with us about how AI governance conversations being unified across governments is important to DeepL?

Jarek Kutylowski

I truly believe that any successful technology needs to be inherently global. That holds both for the commercial models of the companies that we’re representing, but it also holds for the AI: just the access and the ability to reach the whole globe with what we are building. I think this creates the economies of scale on everything that we’re building. And when you are in AI, obviously you’re running very, very high R&D costs, and you have to be able to offset that with a huge customer base. So having a global market and being able to deploy to the whole world, and therefore also to fulfill the mission of our companies, whether it’s just enabling communication, maybe in the case of Zoom, or making sure that this communication can happen multilingually, as in the case of DeepL, that really depends on a framework that is transparent and on a framework that is maybe not too different in all of the parts of this world.

And therefore, having some common layer, having this right balance of protecting the sovereignty, and protecting maybe a slightly different approach and slightly different mindset to certain topics like privacy, where we do have differences across the world, but doing that in a way that has a common understanding, that would be incredibly valuable. I think not only for the companies that we represent, but also really for our users and for our customers who depend on the best possible solutions.

Jason Oxman

Jay, I want to come back to you because you are our resident security expert and sometimes doomsayer about what happens if we don’t include trust and security as part of the conversation. I’ve heard you remind members of the government of India, indeed, that although the five pillars are enormously valuable, if you don’t have security overlaying them, we’re all in trouble. Talk to us about how the trust and security conversation is still a vital component around all the excitement.

Jay Chaudhry

Yeah, I have said that AI is powerful, but AI is dangerous, because this technology can be abused. In India there’s a great focus on five layers, and the focus is about being sovereign, having everything that you can control. It starts with applications, then models underneath, and so on and so forth. While it’s good to have that sovereign stuff, imagine a bad guy can control all of that sovereign stuff sitting somewhere out there. Data poisoning can be done. All kinds of stuff can be done. So having a layer of security across all five layers becomes very important. So we should think about sovereignty not just in terms of this thing sitting in my country, but also in terms of who can access it, who can do some of these things with it, which is often overlooked.

And the adoption of AI is happening very fast. That’s wonderful, and I’m not saying we should slow it down. I think we should embrace it fast, but we should also start embracing cybersecurity, to make sure things are used securely at the same pace.

Jason Oxman

And in order to make sure that security is part of the AI ecosystem, Aparna, I want to ask you about what we all have responsibility to be thinking about as users, what enterprises have a responsibility to be thinking about. You know, we’ve talked about governance from the policy perspective, but, of course, users and enterprises also have a responsibility around AI. And as the COO of Zoom, you look over both the public policy and business aspects of what you’re deploying. How does the conversation about what we all should be thinking about factor into product development and deployment conversations?

Aparna Bawa

It is a true partnership. And you know what? When Jay was talking, it resonated with me. When you work for a technology company, you’re not just working for any company: you want to develop technology, and you want people to adopt it as fast as possible. You want them to be early adopters. It’s so exciting. In fact, in our company, companies have lots of different functions, obviously, our engineers, our developers, our product people are super early adopters. They’re the first to take any sort of app that’s come out, Cursor and the like, and use it in their day-to-day. And then there are other people who have other day jobs. I mean, there are finance people, and the people people, the HR people. They have day jobs, and they’re learning AI at night, because they’re realizing that if I’m not on the AI bandwagon, I’m going to get left behind. And by the way, if you’re looking to develop apps, yes, you can focus on the tech applications, but the secret that’s not getting a ton of attention, maybe a little bit of attention, is the non-technical roles that could be augmented with AI. So in that frame of mind, I think it’s a really important framework. When you work for that kind of technology company, it can be difficult to then start saying, but wait a minute, you need to slow down, because you need to make sure that your CI/CD work is still going, and it’s amplified because of the risks of AI: your security certifications, your red teaming, your privacy standards, all of that stuff has to be maintained.

I will tell you, the user plus the enterprise that is pushing out this technology, it’s a partnership. It is so important. The one thing that we learned during the pandemic: if you think about Zoom before the pandemic, it was an enterprise-focused company, a work-focused company. And basically, when the pandemic hit, we said, okay, all you consumers, we will just hand you a platform that we usually give to IT administrators. And what do IT administrators at our customers do? They decide whether to turn up the security and privacy controls and turn down usability, because it’s a tradeoff. It’s a definite tradeoff. They decide. We, in turn, just handed it to consumers, and you can’t do that.

Who decides? And we realized, okay, public schools, they don’t have IT administrators. They don’t know how to turn on waiting rooms. They don’t know how to, you know, hide the meeting invite. They don’t know how to do these kinds of things. You have an obligation as an enterprise to make sure that there are sufficient controls for the individual user, that it scales all the way up to the enterprise, and to maintain that level of flexibility. You have that obligation. But on the same side, I would say the user, to be smart, has to understand some basic things. I’ll give you an example. My kids use all the AI engines, ChatGPT, Claude. They use them all. And it is a conversation we have to have: you don’t put all your information into your prompt, because if you put all your information in your prompt, it is going into that engine and it will train that engine.

On the flip side, we as an enterprise provider have made the statement, and we have made the policy decision, that we will not use our customer content to train AI models. When I’m training my kids, I have to tell them, you can’t put your address into ChatGPT. You have to make sure that you’re safe in some way. So those are the kinds of things that you have to keep in mind. It’s a partnership between the user and the enterprise. And I think the enterprise obligation scales as you get down into consumer use.

Jason Oxman

And I want to stay on this theme of training the user, if you will, whether they’re your children or a customer, because it is important for the tech industry to be mindful of the downstream. And, David, I want to come to you with this question. Amazon is, in a lot of ways, an upstream operator. You enable business and consumer customers in everything you do, from content to e-commerce to broadband in the future to your cloud customers. So how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream? How do you think about the downstream decisions or ways of operating that your customers are going to have to make as a result of those decisions you make at the Amazon level?

David Zapolsky

Well, we’re fortunate to have the scale to be able to serve enterprises in the cloud at the service layer. And so, even before the current AI craze, we had a couple of decades of experience in thinking through what governance and security look like for our enterprise customers. And as we’ve moved into this newer age where there are AI services available, one of the best solutions that we could come up with is creating an environment within the cloud services that so many hundreds of thousands of enterprises already use, to give them access to models, not just our own. And we do our own models, and there’s upstream governance on those: testing, making sure we correct for bias, the things that a responsible model builder will do.

At the enterprise level, this service is called Bedrock. We try to think through what customers are going to need. So we build in security. We build in the type of infrastructure that allows customers to scale up or down. We build in choice. Enterprises can choose from over 100 different models, open source and closed source. Not just ours, but all of the leading models from all around the world. And so we try to create an environment, a platform, where enterprise customers can come to use this new technology. First of all, to get access to it without having to build their own servers and train their own models. And secondly, to do it in a way where they can rely on the security of the infrastructure.

The other thing that we provide customers is that the data they use to employ those models stays their data. It doesn’t go to the model builders, and it doesn’t go to us. You can build that into the system. And then on top of that, given the way that enterprises are using this technology, we try to build as many tools as possible to put the control of how this technology is deployed into the hands of enterprises and users. So, for instance, on the Bedrock platform, we provide guardrails that allow you, as an enterprise, to basically control what types of outputs the models are going to give you. Are they too toxic?

Are they less biased? Can you filter for certain types of content? We build those controls right into the interface so enterprises can have that control. We build disclosures into the types of services that we offer, so that we provide some visibility and transparency: here’s how this thing is built, here’s what you should use it for, here’s what you probably shouldn’t use it for. And we provide those kinds of choices to consumers. So you have to think through the overall security in the system and in the environment, and the accessibility of this technology. And our approach is that the cloud is probably the best place to do that. It’s certainly the easiest way to access the technology and likely the safest.

Jason Oxman

Jarek, DeepL’s business model has evolved: it started as translation, and now it’s getting into agentic AI, and you have agents on your platform that can execute tasks on behalf of your customers. I can imagine that raises very different governance and policy decisions that you have to make on behalf of your customers when agents can act autonomously versus when you’re just translating, particularly because you’re a global business and they can act autonomously across borders. How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps they were in a language translation world?

Jarek Kutylowski

I think generally, but also in the language space, the stakes are becoming higher and higher. AI is becoming more and more powerful. Even if you look at translation: a couple of years ago, DeepL would be translating your typical email to your customer. And that is important, of course. You want to look great in front of the customer. You want to be eloquent. You want to be able to connect with them, maybe really on a human level, in the language that this customer is speaking. And you’re enabling your business to become global very, very easily. But now what DeepL is translating is plane maintenance records. It’s R&D documentation for new drugs that actually influences how those drugs are developed and whether they’re approved by the FDA or not.

So these are highly critical use cases. And I think it has been mentioned that privacy is just the table stakes; it’s just the beginning. Creating a layer of trust in the outcomes of the AI, whether that’s translation or agentic AI, so that those decisions really follow what the enterprise is expecting of the AI, that is really where the battle is right now. And that is where the governance aspect coming from the political and governmental side obviously needs to be included. But there’s also the aspect of how our enterprise customers want to regulate the AI that is being deployed, and how flexible the products that we all are providing can be toward those very different approaches that we’re seeing across the world, and with different types of enterprises.

Jason Oxman

Each of you mentioned the concept of risk management in your comments, and I want to come back to the balance that Jay alluded to earlier between promoting innovation and managing risk. Obviously there is a trade-off; it’s a sliding scale: the more you regulate risk, the less room there is for innovation. I want to ask each of our panelists, Jay, I’ll start with you, about where you’ve seen a flexible, risk-based approach from government be the most effective, where that flexible approach still leaves room for innovation. Or the flip side, if you want to give any examples: where you’ve seen it go wrong, where a more prescriptive approach to regulation has denied you the opportunity to bring products or services to market, or has generally been more of a challenge for industry, because a government didn’t get the balance right between managing risk and promoting innovation.

Jay Chaudhry

There are many facets of governance and risk. Take, for example, data privacy. Obviously, that’s one kind of factor. But hacker attacks, from a cyber point of view, are a different kind of factor. We look at it more in terms of two things. One, making sure your data is not lost. So the data becomes very important. There’s a consumer end of data, but the bigger issue on the data side is enterprises. And in the practical business world, you don’t treat all data the same way. I’ll give you an example. When I worked with General Electric, the CISO, a very smart guy, Larry Biagini, would say, when I tried to secure everything, I secured nothing.

Then he would give an example. He said, as a CISO, I need to protect the IP, the intellectual property, of my products. But my washers and dryers are out there. I don’t spend time trying to protect their IP at all. You can buy them in a store and figure them out. But I’m dead serious about protecting the IP on my jet engine. That’s very important. Trying to just say all consumer data, all this data, just starts creating issues. That’s why I also like to say compliance doesn’t mean security. In fact, compliance works through government entities, pros, cons, and it takes a lot longer. And by the time it’s out there, the cyber and compliance needs have moved on.

So the stuff you put in place is, many times, already old. In fact, when Zscaler came out with our zero trust, cloud-based architecture, a lot of these regulators came in: wait a second, where is your firewall? What do you mean, firewall? We don’t use firewalls. We are anti-firewall. And they said, no, no, no, wait a second, the banks can use it. It’s not a firewall. When we went through certification for the federal government in the U.S., the certifying body first came asking about firewalls. It took us three months to educate them. That’s why I really don’t like overregulation. There needs to be a way of asking: what’s the impact of this thing, on what kind of stuff? That’s the right approach. All data is not created equal. Trying to put the onus on securing all data gets hard; then classifying data gets hard. So these are not simple issues, and AI makes them very hard. We don’t even fully understand how AI does what it does. So I think a flexible policy that evolves is a better thing, while keeping track of the most important data. And then, beyond data, hackers too, that’s a big problem. We talked about agents. Today a user is the weakest link; tomorrow AI agents will be your weakest link, and they’ll be all over. They are maturing. They’ll come. Imagine an agent getting hacked or hijacked in your company with access to all kinds of stuff.

So that’s where companies like Zscaler are focused on making sure our Zero Trust Exchange can be extended to deal with agents, starting with understanding their identity, authorization, all those things. Those things are very important, the way we look at it. Otherwise, business will shut down.

Jason Oxman

So, Aparna, Zoom brings some amazing innovations using AI to the platform that we’re all familiar with. It makes it a lot easier for us to do everything from transcribing meetings to pretending to be a cat when you’re in court. No, that’s not a... that’s a...

Aparna Bawa

I was going to say it can summarize your meeting. It can take notes for you. It can send action items to your teams. It can calendar those action item follow-ups. It can give them deadlines. All done.

Jason Oxman

There it is. But I can imagine you’ve had some challenges around the world with that balance between innovation and risk management from governments. Can you share a positive example of where that’s gone well in your mind, or, if you want, an example of where it hasn’t gone well, where consumers and businesses have been denied Zoom innovation because that balance wasn’t struck? Or perhaps you can keep it at a higher level if you prefer.

Aparna Bawa

It’s a balance between innovation, our product team, innovate, innovate, innovate, and our governance team, security, privacy, et cetera, which is always thinking about that as well. So how do you strike that balance? I’ll start at the top level. It’s a sliding scale on many different fronts. But if you look at it like a layer cake, or even a data stack, the top level is customer choice. David was very appropriate when he said customer choice, but customer choice is different by the category of customer. If you are an enterprise and you have 200 people on an IT admin team or under the CIO, and you are buying Zoom and you have a giant security team and a giant compliance team, you’re going to be making choices for yourself.

I’m not going to tell HSBC what they’re going to do. They’re going to decide what they’re going to do. And we deliver the platform, and we have toggles for them to decide what they want to deploy, what they don’t want to deploy, who they want to deploy it to. We make it very easy. So we provide a lot of choice. The same platform serves the Fortune One. The same platform also serves my mother-in-law, who is on the free account and who is chatting with her friends and won’t upgrade. I tell her, please upgrade. She gets off, waits five minutes, gets back on, and that’s how they do it. So for her, it’s very different.

So for her, you have to mandate a few things. You can’t give your meeting ID to everybody. It cannot be on the top of the UI. You know, those are some basic things. You have to have waiting rooms. If you’re in a school environment, you have to have mandatory passcodes. These are the sorts of things, so that’s a sliding scale. To take it one level deeper: the biggest thing I have learned from working at Zoom, and in all honesty, I credit our founder for this, is that everything goes back to the user experience. And our customers are not monoliths.

They don’t just want to take on all the technology. They want to do it in a safe and secure way. They don’t want to be surprised. So you have to think: I am an end user. It doesn’t matter that I sell to Zscaler. Thank you very much. I need to worry about how Jay Chaudhry’s engineer feels when he gets on Zoom. That’s the user experience I’m going for. So if I’m a finance person on Jay Chaudhry’s team and I say, I don’t really want my meeting to be automatically transcribed and then fed into an AI engine, because I’m worried, or if I’m a lawyer worried about attorney-client privilege, well, I need to give them the option to say, I opt out of that.

I need to be able to give them choice. And I think that’s how I think about it. Every risk-based decision is: you are a user. And you don’t have one kind of user; you have multiple types of users. How do you make it easy? How do you make it easy, at the very lowest common denominator, for them to trust you? That’s really the answer that you work through.

Jason Oxman

That’s great. David, let’s go from different kinds of users to different kinds of products. You were the first on the panel to use the phrase risk-based approach, and nowhere is that more evident than in Amazon’s wide range of products and services for your customers. I can imagine it’s a very different internal conversation about governance and risk when determining how AI is going to recommend my next series or show on Amazon Prime. Not a lot of risk there. But other Amazon products could have more risk to them. So on the sliding scale, and you also travel the world, quite literally, as you’re doing now, talking to governments about innovation versus risk management and the risk of getting that balance wrong.

How do you communicate that to governments and also make the internal product decisions that you need to around those issues?

David Zapolsky

Well, you sort of stole one of my talking points for when I have some of these conversations, which is that it does matter how this technology is used, and where. It’s a different set of considerations when we think about what kind of protections or risks arise from an AI-assisted shopping assistant versus a tool we might make available to help doctors document how they’re treating patients and make it easier to prescribe medications. Those are two very different risk profiles. But if you start with a regulation that doesn’t differentiate between those, you’re going to inhibit innovation. You’re going to prevent adoption of really useful ways that this technology can be used.

You know, that’s the pitch I make when I get to talk to people whose business it is to think about regulation. It is about risk. It’s about how the technology is used. And my point earlier was that we don’t really know yet how the technology is going to be used. When we see it, we can analyze it. And on that point generally, there are cases where technology companies have made a decision not to bring certain types of technology into, say, Europe because of regulatory uncertainty. And typically those get worked through. But I can’t tell you how many conversations I’ve had internally where folks have come up with an idea or a product, and our sort of internal mantra is we want to launch something everywhere, all at once.

We want to serve customers. If we have conviction that something’s going to happen, if it’s good for customers, why just do it in one place? And sometimes the answer to that is: it’s too costly. It’s going to take more time. We can’t really figure out how this is going to fit within the regulatory scheme in a certain other jurisdiction, because they haven’t thought of it either. And so we’re going to wait. We’ll launch it in this place first and we’ll see if it works. And if it works, then we’ll think about the costs associated with scaling it globally. So that’s a real-world issue that governments have to understand and deal with when they make decisions about how prescriptive their regulations are going to be, especially in the abstract.

And so those are the sorts of conversations I have. In the AI space, you can look at countries like Peru. You can look at countries like Japan, which have proceeded cautiously. I think India has the same approach, and I’m very encouraged by the way India is approaching these issues. You can’t rule out regulation completely. And Amazon is an advocate of regulation that mandates that people developing and deploying this technology do it responsibly. But we have to understand what we’re regulating before we can really pull the trigger. And I think those types of examples are useful for people to keep in mind when they’re considering how to resolve that balance.

Jason Oxman

And the result of those conversations not going in the right direction, David, is that consumers or businesses might be denied the technology that their neighbors are enjoying. So, Jarek, I wanted to ask you, as the CEO of DeepL: in the process of expanding around the globe, are there examples you can think of where you’ve had to make a go/no-go decision about entering a particular country or launching a particular product, including your new agentic AI products, because of the regulatory environment or because of the way a country looks at these issues? Or, the flip side, if you want to take the positive: are you attracted to a particular market because, as David said, it’s done the right thing, like Peru or Japan, or even as India is endeavoring to do, where they’re more likely to get DeepL service because of the decisions they’ve made and the approach they take to these AI governance decisions?

Jarek Kutylowski

Yeah, Jason, let me maybe first start with a principle. I’m a scientist at heart, so I’m really excited about bringing the best possible technology to each and every one of our customers and users. I think they all deserve it. I think they all should be equipped with it. But yes, there are some of those things we need to take into account. And actually, quite often, those are not really location-based or country-based or regulation-based, but really based on the use cases of those customers. AI can be incredibly powerful, but that power demonstrates its possibilities in different ways in different applications. And going back to my example from earlier: the translation of an email has a different criticality grade than the translation of a patent application.

The execution of an agent in a particular environment versus in an enterprise environment has a different grade of complexity. But going back to the regulation aspect of it, I think we’re lucky as a company to have grown in Europe, in an environment which is maybe slightly ahead on regulation compared with other places in the world. And I think that gives us an edge: being able to understand how to work with this regulation, how to prepare, and then also being very, very early in other markets, like Colorado, which you mentioned earlier, and being able to handle that complexity for our customers, really. Because most often it is our customers who do not understand this space.

We do. And we have to go all of the way to give them the ability to figure this out for themselves, for their applications, for their use cases, and across a whole range of products. So in short, I think it can be managed, but it is really part of the excellence of a company to be able to manage it together with the customer.

Jason Oxman

The last question we have time for, I want to address to each of you, and it’s a forward-looking question. It used to be possible to have conversations about policy outcomes years in advance. I think the best we can hope for is for me to ask this question in advance of Switzerland hosting the next AI Impact Summit, or whatever they choose to call it, next year at this time. So my question to all of you on the panel is: a year from now, if we were to gather, and something had happened in the AI governance and regulatory space over the course of that year that you’d like to see happen, and you were looking back to India and saying, I’m really glad that one thing happened, or that one thing changed, or this government or this international body did this thing over the course of the last year to really help unleash the innovation and power of AI in the secure way that we all want to see, what could that one thing be?

And it can be something that you’re focused on in your business over the course of the next year that government can help make a reality. So, Jay, I’ll start with you with this question, then I’ll go down the panel to bring our time to a close together. What’s the one thing you’re hoping, if we’re talking a year from now, has happened in global AI governance that’s going to make everything we’re talking about and excited about a huge success?

Jay Chaudhry

The AI train is moving at a pretty fast pace. It will keep on moving. Then you look at the things that could go wrong; that’s where governance comes in. I think there’s too much focus on data and less focus on the bad things that bad guys can do. Probably the biggest issue will be, hey, today we hear all about these ransom attacks, ransomware. AI can make it so much easier. Bad guys are very motivated to make money. Today, when they attack, they have to find your attack surface. They’re finding those IP addresses that are open to the Internet, those firewalls and VPNs and everything. With AI, you can discover it in 30 seconds. AI can write beautiful phishing emails as if they come from your CFO.

Once you get in, AI agents can discover your whole network to figure out what those things are. They can bring those things down. So I think we need to focus more on making sure we can protect against those risks. I talked about AI agents going rogue; those are one kind of risk. And the second kind of risk government needs to worry about is nation-states trying to use AI to really gain advantage, getting these backdoors planted and all that kind of stuff. I think, if we’re sitting here next year and we’ve done enough in those areas, we won’t have some of these things blow up.

If they blow up, then government starts tightening things more and more, which sometimes doesn’t help. So proactive work to secure those areas will be very, very important.

Jason Oxman

All right. So protecting against these threats so that government doesn’t overreact and stifle innovation as a result. Aparna, what’s your one thing that you hope for for next year?

Aparna Bawa

You know, it really struck me at this Impact Summit, the focus on inclusivity, on skilling and upskilling people who wouldn’t otherwise have access to technology. And if you think about why we got started: we were founded because we wanted to provide free and open access to collaboration and have people from all walks of life connect. Our founder had to travel to date his wife, you know, and didn’t want to see her only once every several weeks. So, you know, it’s something powerful. In a year, I would like to actually see that happen. Now, I don’t think it’s completely altruistic. I firmly believe that even enterprises, which have more of a chance of adopting AI and gaining some of the efficiencies of AI, need a market.

And the market is you, me, and all of us. The more people, in a village somewhere in a corner of India, we were just talking about Karnataka in another meeting, a village that has low bandwidth, et cetera. If a farmer there can adopt AI and it can change their lives across successive generations, that is good for business. So for me, progress on that. I still think it’s mostly talk right now, but I love the idea. I love seeing a billboard where Prime Minister Modi is talking about inclusivity. That’s wonderful to hear. It’s good for business. Maybe it’s a bit altruistic, but I think it would be good for Zoom.

Jason Oxman

I love it. AI lifting up the world more broadly. David?

David Zapolsky

I’ll take a much higher-level approach. You know, I think there’s a sort of consensus around AI regulation that’s yearning to get out. It’s gelling a little bit. We saw it in the Hiroshima agreements. We see it talked about in forums like this. There is an emerging consensus about how to approach this technology in a responsible way. And I totally, again, violently agree with Aparna on adding the inclusiveness piece, and I commend the Prime Minister and India for making that a big part of the debate. But I would like to see countries around the world start to converge on this basic consensus.

It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is a movement toward an international standard. And there’s a parallel with technical standards: there’s ISO 42001, which everybody can abide by, giving people a common set of principles and a common set of technical standards to meet, so that we can all be more confident in the way we roll out this technology.

Jason Oxman

I love that. A move toward more global, industry-consensus-based standards to help govern all that we do, and hopefully put government regulators out of business if we can all do it right. Jarek, you get to bring us home with your aspiration as we gather together next year in Switzerland.

Jarek Kutylowski

Yeah, I think there’s a place for those government regulators too. I would love, as you just explained, to get them all together and create a framework. But I think there is a bigger role for AI in this world. There are so many amazing humans across all of the continents of this world, and I would love to see, in a year, and once again this goes back a little bit to DeepL’s mission, for them to be able to collaborate as much as they can, no matter where they sit geographically, no matter which language they speak, no matter what they do in their job. Just giving that opportunity to each and every one, in every place of this world. And there are amazing examples of cooperation between India and other countries; strengthening that even more, I think AI gives us even more possibilities in the upcoming year. So maybe in Switzerland we’ll be able to look back and say, hey, in India we set the cornerstone of making this possible and making this world a better place.

Jason Oxman

I bet they will. You know, it was AI Action last year; now it’s AI Impact. Hopefully it will be AI Collaboration or something of the sort next year. I love that image of everybody across borders, across geographies, across languages collaborating together. What a great discussion. I love how we were both philosophical and practical. I really appreciate all of you sharing your deep insight on these important AI governance issues. And I appreciate all of you being here in the audience to hear this discussion. Please join me in recognizing and thanking our terrific panelists. And please enjoy the rest of the summit. Thank you. Now we’ve got to get a picture. Are we going to take a picture?

We have to get a picture, yeah. We’re going to have to hang back behind there.

Related Resources: Knowledge base sources related to the discussion topics (19)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Jason Oxman framed the AI industry’s dual challenge as managing risk while fostering global innovation and interoperability, urging governments to move beyond fragmented, nation‑centric rules toward coordinated AI governance that can support systems at scale.”

The Open Forum discussion highlights the need to address resource inequities, build global regulatory capacity, and coordinate multiple governance frameworks to avoid fragmentation while respecting national approaches, aligning with Oxman’s call for coordinated AI governance [S23].

Confirmed (high)

“Panelists included Jay Chaudhry (CEO, Zscaler), Aparna Bawa (COO, Zoom) and David Zapolsky (Chief Global Affairs & Legal Officer, Amazon).”

The panel roster listed in the knowledge base confirms the participation of Jay Chaudhry, Aparna Bawa, and David Zapolsky in the AI governance discussion [S2].

Confirmed (medium)

“Jay Chaudhry referenced India’s five‑layer AI security model and argued that without a comparable security overlay the model could be abused, emphasizing the need for sovereignty‑respecting security controls.”

India’s layered approach to AI sovereignty, focusing on software stacks, model development, orchestration, and applications, is documented as a strategic framework, supporting Chaudhry’s reference to a five-layer model and the need for security overlays [S79] and [S45].

Confirmed (high)

“Aparna Bawa emphasized that cross‑border data flows are essential for Zoom’s global connectivity and that any restriction impedes citizens’ progress by throttling AI‑supporting infrastructure.”

Discussion on cross-border data flows stresses that data localization stifles businesses and that unrestricted flows are vital for services like Zoom, matching Bawa’s point [S82] and [S81].

Additional Context (medium)

“The COVID‑19 pandemic forced Zoom to shift from an enterprise‑only platform to a consumer‑facing service, leading to rapid deployment of default security controls such as waiting rooms and passcodes.”

Reports note that Zoom pivoted during the pandemic to a broader AI-first work platform and that security features like waiting rooms were added to protect users, providing context for Bawa’s statement [S84] and [S85].

Confirmed (medium)

“David Zapolsky cited Colorado’s early AI law as an example of premature, blanket regulation that creates costs and stalls adoption because ‘no one really knows how to apply it’.”

Colorado’s AI law, described as pioneering yet controversial, has faced criticism for its early implementation and unclear application, corroborating Zapolsky’s example [S89].

External Sources (90)
S1
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S2
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — David Zapolsky: Chief Global Affairs and Legal Officer at Amazon
S3
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — And Dr. Jarek Kutylowski. How did I do there? Thank you, is the CEO of DeepL. So to set up the conversation, I wanted to…
S4
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Aparna Bawa: Chief Operating Officer (COO) of Zoom
S5
The Role of Government and Innovators in Citizen-Centric AI — Arthur Mensch, Jarek Kutylowski
S6
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Jarek Kutylowski envisioned enhanced global collaboration that transcends language and geographic barriers. And Dr. Ja…
S7
Driving U.S. Innovation in Artificial Intelligence — 7. Jason Oxman – President & CEO, Information Technology Industry Council 8. Julia Stoyanovich – Associate Professor, De…
S8
Agentic AI in Focus Opportunities Risks and Governance — Jason Oxman: Moderator/Host, appears to be with ITI (Information Technology Industry Council)
S9
Cutting through Cyber Complexity / DAVOS 2025 — Jay Chaudhry: CEO, Chairman, and Founder of Zscaler. 3. Zero Trust Architecture: Jay Chaudhry, CEO of Zscaler, argued …
S10
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S11
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Jay Chaudhry: CEO of Zscaler (security expert)
S12
Panel Discussion Data Sovereignty India AI Impact Summit — “So I think the takeaway is that as far as the infrastructure layer is concerned, as in sovereignty in compute is not on…
S13
Discussion Report: Sovereign AI in Defence and National Security — This comment shifts the discussion from narrow military applications to a comprehensive view of national resilience, inf…
S14
Cyberattacked: Who do you call? — Individual users are often the weakest link in cybersecurity protection. More simple ‘cyber hygiene’ measures are needed …
S15
‘Operation Ghost Click’: Cyberzombies in the real world — The law is not enough. As always, the humans are the weakest link – almost every cyberattack has users’ ignorance and ne…
S16
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents …
S17
Operationalizing data free flow with trust | IGF 2023 WS #197 — Lastly, Narayan from Nepal proposed the need for common regulations and collaborations to address privacy, security, and…
S18
Rule of Law for Data Governance | IGF 2023 Open Forum #50 — Additionally, the analysis underscores the importance of harmonizing and aligning laws to facilitate cross-border data f…
S19
Unlocking Trust and Safety to Preserve the Open Internet | IGF 2023 Open Forum #129 — Assessments are tailored based on risk, such as user volume, and the company’s product features
S20
Design Beyond Deception: A Manual for Design Practitioners | IGF 2023 Launch / Award Event #169 — Cristiana Santos:The first time in a decision we suggest that along with this DPA other enforcers name and publicize vio…
S21
https://dig.watch/event/india-ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — And I think the second point we should think about is I think the human state of mind works well in default versus optio…
S22
WS #100 Integrating the Global South in Global AI Governance — Roeske Martin: Thank you, Fadi, both for having us here and for your great partnership in this research that we’ve do…
S23
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S24
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically high…
S25
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S26
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misleaded or dangerous financial innovat…
S27
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S28
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S29
Dynamic Coalition Collaborative Session — Matthias Hudobnik: Thanks a lot. Yeah, it’s a pleasure to be here at the Internet Governance Forum. I’m excited to contr…
S30
Regional experiences on the governance of emerging technologies NRI Collaborative Session — Chin Lin: Okay, I think to answer this question, we have to know that to set up a user-centric deployment is a collapsib…
S31
The Future of Digital Agriculture: Process for Progress — Technologies must be easily accessible, economically viable for the lowest-income groups, relevant to the context, and s…
S32
AI That Empowers Safety Growth and Social Inclusion in Action — And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advance…
S33
The US National Cybersecurity Strategy — We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software w…
S34
Data Governance in the Context of Emerging Technologies: Promoting Human-Centred and Development-Oriented Societies   — In the context of this data-driven economy, the governance of this key asset should be tackled in a multilayered way. On…
S35
Ministerial Roundtable — Careful understanding of opportunities for cultural and language aspects is important, requiring upskilling and knowledg…
S36
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yes, I think to some extent you are a bit too hopeful. because I would say we are currently making demand…
S37
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S38
CSTD open consultation on WSIS+20 — The analysis also recognizes the digital divide and the importance of bridging it. Inclusivity in ICT access, particular…
S39
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If e…
S40
General notices • Algemene Kennisgewings — Large-scale initiatives require a high level of political and organisational leadership, supported by financ…
S41
(Plenary segment &amp; Closing) Summit of the Future – General Assembly, 6th plenary meeting, 79th session — The level of disagreement among speakers is moderate. While there are differences in approach and emphasis, most speaker…
S42
Agentic AI in Focus Opportunities Risks and Governance — “These standards-setting organizations are now very, very deep into sort of developing these same standards on agentic….
S43
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Anastasiya Kozakova:Thank you very much. It’s a pleasure to be here. I represent the civil society organization. I work …
S44
WS #31 Cybersecurity in AI: balancing innovation and risks — Sergio Mayo Macias: Yes, thank you. Thank you, Gladys. Well, actually, the AI environment in Europe is known and has …
S45
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — This observation provides crucial historical context showing how trust requirements have fundamentally changed as AI mov…
S46
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Beyond safety by design, companies need governance from design embedded at every stage from ideation through deployment …
S47
Shaping the Future AI Strategies for Jobs and Economic Development — The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing …
S48
From principles to practice: Governing advanced AI in action — Sasha Rubel: It’s not an afterthought. I love that. Safety is the foundation and not an afterthought. It’s again one of …
S49
Cognitive Vulnerabilities: Why Humans Fall for Cyber Attacks — Therefore, application designers should aim to strike a balance between ensuring the security of transactions and provid…
S50
Clear-Eyed about Crypto — Prioritizing end user experience and choice is fundamental and should not be overlooked. Users should have the freedom t…
S51
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — In conclusion, the analysis highlights different perspectives on the impact of regulation on the tech industry. While le…
S52
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S53
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Global cooperation and dialogue is needed to build common frameworks
S54
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The panelists stressed the need for harmonized global regulations to avoid fragmentation and ensure interoperability acr…
S55
WS #179 Privacy Preserving Interoperability and the Fediverse — Claybaugh contends that federated platforms must recognize and accommodate different user sophistication levels, from te…
S56
How IS3C is going to make the Internet more secure and safer | IGF 2023 — In conclusion, the analysis emphasizes the importance of a comprehensive security by design approach, collaborative effo…
S57
UNCTAD E-Commerce Week — In the G7 ICT Priorities: Technology, Innovation and the Global Economy session, the important role of ICT policy for the …
S58
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S59
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Chaudhry warns that if each nation imposes its own AI rules, companies operating across borders will face fragmented com…
S60
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The panelists stressed the need for harmonized global regulations to avoid fragmentation and ensure interoperability acr…
S61
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically high…
S62
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S63
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misleaded or dangerous financial innovat…
S64
State of Play: AI Governance / DAVOS 2025 — The discussion highlighted tensions between regulation and innovation. While some advocated for light-touch governance t…
S65
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S66
Agentic AI in Focus Opportunities Risks and Governance — “These standards-setting organizations are now very, very deep into sort of developing these same standards on agentic….
S67
Responsible AI for Children Safe Playful and Empowering Learning — “safety, privacy, these are absolutely foundational and non‑negotiable as we’ve seen on the LEGO education side and simi…
S68
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S69
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — But the trust in these systems have to be built over time, and they don’t come without some assurance being put in place…
S70
AI That Empowers Safety Growth and Social Inclusion in Action — And so looking at how we can put in place practical safeguards that ensure that AI works for people, not only in advance…
S71
Regional experiences on the governance of emerging technologies NRI Collaborative Session — Chin Lin: Okay, I think to answer this question, we have to know that to set up a user-centric deployment is a collapsib…
S72
Opening Ceremony — Innovation must be guided by responsibility, with safety and privacy designed into products from the start
S73
WS #179 Navigating Online Safety for Children and Youth — There is a need for both technical solutions (safety by design) and education/awareness initiatives
S74
The US National Cybersecurity Strategy — We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software w…
S75
Reviewing Global Governance Capacity Development and Identifying Opportunities for Collaboration — The global cloud computing market is accelerating. Companies are increasingly looking at cloud computing as a vi…
S76
Acknowledgements — Governance for cloud computing refers to the system by which the provision and use of cloud services are directed and co…
S77
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S78
Technology Regulation and AI Governance Panel Discussion — Joel Kaplan emphasized the importance of maintaining regulatory environments that support AI development through access …
S79
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S80
Fireside Chat Intel Tata Electronics CDAC &amp; Asia Group _ India AI Impact Summit — He advocated for a layered approach to sovereignty, focusing on controlling critical chokepoints whilst accepting strate…
S81
African approaches to Cross-border Data Flows (GIZ) — Another area of concern was the impact of the e-commerce joint statement initiative on Africa’s data privacy efforts and…
S82
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — 4. Cross-Border Data Flows and Trade Paul Baker: Okay, thank you. Just quickly, I think that we have to be practical, …
S83
Advancing digital inclusion and human-rights:ROAM-X approach | IGF 2023 — Grace Githaiga:I think I want to be very brief. When we looked at the rights, and this is our first review, because we d…
S84
From video to AI: Zoom’s next chapter — Zoom, once synonymous with video conferencing during the pandemic, is pivoting to redefine itself as an ‘AI-first work pla…
S85
AI@UN: Navigating the tightrope between innovation and impartiality — The COVID-19 pandemic prompted a second shift—from physical to online meetings. While UN buildings ensure security and i…
S86
Software.gov — In conclusion, Doreen Bogdan-Martin emphasizes the importance of GovStack as an efficient and reusable tool for implemen…
S87
E-commerce in the WTO: the next arena of Internet policy discussions? — Regulatory frameworks on privacy are key to protecting personal information and enhancing trust in e-commerce, according …
S88
WS #278 Digital Solidarity &amp; Rights-Based Capacity Building — Jennifer Bachus: Thanks to all of you. So, we’re going to go to an interactive discussion now in what’s a little uncon…
S89
Colorado’s AI law under review amid budget crisis — Colorado lawmakersface a dual challengeas they return to the State Capitol on 21 August for a special session: closing a…
S90
Keynotes — O’Flaherty acknowledges that the regulatory work is not finished and that current regulatory models will likely be insuf…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Jay Chaudhry
6 arguments · 142 words per minute · 1116 words · 469 seconds
Argument 1
Need for balanced alignment; over‑alignment stifles innovation
EXPLANATION
Jay argues that while some degree of alignment among governments is necessary, excessive alignment can hinder innovation. He stresses that too much governance and compliance can kill the pace of technological progress.
EVIDENCE
He notes that in a highly connected world, having a line of alignment is good, but over-alignment does not help, and that excessive governance and compliance kill innovation [24-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel summary warns that over-regulation is a barrier to innovation and that alignment, while needed, should not be excessive [S2]; the Davos commentary on over-regulation makes a similar point [S9].
MAJOR DISCUSSION POINT
Alignment vs innovation
DISAGREED WITH
Aparna Bawa, David Zapolsky, Jarek Kutylowski
Argument 2
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
EXPLANATION
Jay distinguishes compliance from true security, asserting that meeting compliance requirements does not guarantee protection. He calls for flexible, evolving policies rather than rigid, prescriptive regulations that can delay or block innovation.
EVIDENCE
He explains that compliance does not equal security, that over-regulation creates outdated controls, and that a flexible policy that evolves with technology is needed, citing examples from his experience with Zscaler and federal certification processes [180-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for flexible, evolving policies rather than rigid compliance mandates is highlighted in the governance-innovation tension report [S2].
MAJOR DISCUSSION POINT
Compliance vs security
AGREED WITH
Aparna Bawa, David Zapolsky
Argument 3
AI can be weaponized; security overlay across all layers required; sovereignty alone insufficient
EXPLANATION
Jay warns that AI’s power can be abused, making security essential at every layer of the stack. He argues that sovereignty of data or models is not enough unless access and usage are also secured.
EVIDENCE
He describes scenarios such as data poisoning and malicious control of sovereign AI stacks, emphasizing the need for security across all five layers and noting that sovereignty must include who can access the system [87-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s potential as a weapon and the necessity for layered security are underscored in the AI-driven cyber-defense briefing [S16].
MAJOR DISCUSSION POINT
AI weaponization and layered security
AGREED WITH
Jason Oxman, David Zapolsky, Aparna Bawa, Jarek Kutylowski
DISAGREED WITH
Aparna Bawa
Argument 4
Users are the weakest link; need identity and authorization controls for AI agents
EXPLANATION
Jay points out that users, especially AI agents, can become the weakest security link if not properly managed. He stresses the importance of identity, authorization, and control mechanisms for AI agents to prevent hijacking.
EVIDENCE
He mentions that AI agents could be hacked or hijacked, gaining access to corporate resources, and that Zscaler is developing zero-trust controls to manage agent identity and authorization [209-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
User weakness in cybersecurity and the importance of identity controls are documented in the user-weakest-link analysis [S14] and the social-engineering case study [S15].
MAJOR DISCUSSION POINT
User/agent security risk
Argument 5
Governments should focus on AI‑enabled threats (ransomware, nation‑state misuse) to avoid over‑regulation
EXPLANATION
Jay urges governments to prioritize protecting against AI‑driven cyber threats rather than imposing blanket regulations that could stifle innovation. He highlights ransomware, AI‑generated phishing, and nation‑state exploitation as key risks.
EVIDENCE
He outlines how AI can accelerate ransomware attacks, generate convincing phishing emails, and be used by nation-states, arguing that proactive security focus will prevent reactionary over-regulation [340-360].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled cyber threats such as ransomware and nation-state misuse are highlighted as priority risks in the AI-driven cyber-defense report [S16].
MAJOR DISCUSSION POINT
Prioritising AI‑driven cyber threats
Argument 6
Security must keep pace with rapid AI adoption; cyber safeguards should be embedded as AI scales.
EXPLANATION
Jay argues that while AI adoption should be fast, security measures need to be introduced simultaneously to prevent abuse, emphasizing a parallel track of cyber protection alongside AI rollout.
EVIDENCE
He states “I think we should embrace fast, but we should also start thinking about embracing cyber to make sure things are used securely at the same pace” [96-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel stresses that security must evolve alongside fast AI adoption to avoid stifling progress [S2].
MAJOR DISCUSSION POINT
Synchronizing security with AI speed
Aparna Bawa
6 arguments · 180 words per minute · 1935 words · 643 seconds
Argument 1
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
EXPLANATION
Aparna stresses that global services like Zoom rely on unrestricted cross‑border data flows, and that fragmented national regulations hinder both business and citizen progress. She calls for a basic, shared framework that balances sovereignty with free data movement.
EVIDENCE
She cites Zoom’s dependence on cross-border data for global connectivity and argues that increasing national restrictions impede citizens’ progress, while also acknowledging privacy and security as table-stakes [47-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critical role of unrestricted cross-border data flows for global services like Zoom is emphasized in the summit remarks [S1], and calls for harmonised regulations are made in the IGF data-flow discussions [S17].
MAJOR DISCUSSION POINT
Data flow and regulatory fragmentation
AGREED WITH
Jason Oxman, David Zapolsky, Jarek Kutylowski
Argument 2
Enterprises must embed security, privacy, and user controls; partnership between provider and user essential
EXPLANATION
Aparna describes a partnership model where both the enterprise and the end‑user share responsibility for secure AI use. She emphasizes embedding security, privacy, and clear user controls into products from the start.
EVIDENCE
She notes that security certifications, privacy standards, and red-team testing must be maintained, and that the enterprise-user partnership is vital for safe AI deployment [102-108] and further elaborates on obligations to provide sufficient controls for all user types [119-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of an enterprise-user partnership for secure AI deployment is noted in the governance panel summary [S2] and reinforced by the trust-and-safety assessment framework [S19].
MAJOR DISCUSSION POINT
Enterprise‑user security partnership
AGREED WITH
Jay Chaudhry, Jason Oxman, David Zapolsky, Jarek Kutylowski
Argument 3
Tiered controls and user choice enable risk decisions tailored to user type; preserve user experience
EXPLANATION
Aparna explains that Zoom offers configurable security and privacy settings so that different user groups—enterprises, schools, individual consumers—can choose the level of protection that fits their needs without sacrificing usability.
EVIDENCE
She describes how Zoom provides toggles for security features, differentiates between enterprise and consumer accounts, and ensures mandatory controls (e.g., waiting rooms, passcodes) for higher-risk environments while keeping the experience smooth for casual users [230-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk-based, configurable security controls that respect user experience are discussed in the trust-and-safety assessment report [S19].
MAJOR DISCUSSION POINT
Granular user‑centric risk controls
AGREED WITH
David Zapolsky, Jay Chaudhry
Argument 4
Users need education; enterprises must not use customer data for training; provide opt‑out mechanisms
EXPLANATION
Aparna highlights the need to educate users—especially younger ones—about safe AI interactions and asserts that Zoom will not use customer content to train its models, offering opt‑out options where appropriate.
EVIDENCE
She recounts teaching her children not to share personal information with AI engines and notes Zoom’s policy of not using customer content for model training, stressing the importance of user awareness and opt-out capabilities [124-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
User education as a defence against cyber threats and the need for opt-out mechanisms are highlighted in the user-weakest-link study [S14] and the social-engineering case analysis [S15].
MAJOR DISCUSSION POINT
User education and data privacy
Argument 5
Aim for AI access in low‑bandwidth regions; inclusive upskilling benefits business and society
EXPLANATION
Aparna envisions AI tools reaching underserved, low‑bandwidth areas, arguing that upskilling and inclusive access create both social benefits and new market opportunities for enterprises.
EVIDENCE
She references the summit’s focus on inclusivity, mentions villages in Karnataka with limited bandwidth, and argues that enabling farmers with AI can generate multi-generational benefits while also expanding Zoom’s market [364-374].
MAJOR DISCUSSION POINT
Inclusive AI deployment
AGREED WITH
David Zapolsky
Argument 6
Zoom’s product development prioritizes user experience, using configurable controls to balance security, privacy and usability.
EXPLANATION
Aparna explains that Zoom designs features around how users actually work, offering toggles and defaults that let enterprises and individual users choose appropriate security levels without sacrificing the overall experience.
EVIDENCE
She says “everything goes back to the user experience… they don’t want to take down all the technology… they want to do it in a safe and secure way” [252-257] and describes the platform’s granular toggles for waiting rooms, passcodes, and other controls that adapt to different user types [230-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The trade-off between regulation and user experience in Zoom’s design is described in the panel commentary on user-centric product development [S1].
MAJOR DISCUSSION POINT
User‑centric product design versus compliance‑first approaches
David Zapolsky
6 arguments · 169 words per minute · 1827 words · 645 seconds
Argument 1
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
EXPLANATION
David argues that Amazon’s global operations depend on the free movement of goods, data, and services, and that government barriers create friction. He suggests developing shared high‑risk principles rather than detailed, premature regulations.
EVIDENCE
He describes Amazon’s reliance on free flow of goods, information, and open skies across its stores, cloud, entertainment, and satellite services, and warns that each new barrier adds friction; he then calls for common high-risk principles based on real harms rather than speculative rules [60-63] and [67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of free-flow of data and goods for global operations and the call for high-risk principles are echoed in the cross-border data-flow discussion [S1] and the alignment-innovation tension report [S2].
MAJOR DISCUSSION POINT
Global trade and AI risk principles
AGREED WITH
Jason Oxman, Aparna Bawa, Jarek Kutylowski
Argument 2
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation; internal product decisions weigh risk vs rollout
EXPLANATION
David stresses that AI regulations need to differentiate between use‑cases, as the risk profile of a shopping assistant differs from a medical documentation tool. He explains how Amazon balances product rollout decisions with regulatory uncertainty.
EVIDENCE
He explains that AI applications have varied risk profiles, and that blanket regulation would inhibit innovation; he also details internal discussions about launching products globally versus waiting for regulatory clarity, citing examples like AI-assisted shopping versus clinical documentation [281-306].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A risk-based, use-case-specific regulatory approach is advocated to avoid stifling innovation in the governance-innovation balance summary [S2].
MAJOR DISCUSSION POINT
Use‑case‑driven regulation
AGREED WITH
Jay Chaudhry, Jason Oxman, Jarek Kutylowski
Argument 3
Build security, guardrails, and data ownership into cloud services (Bedrock); give enterprises direct control
EXPLANATION
David outlines how Amazon’s Bedrock platform embeds security, model‑level guardrails, and ensures that customer data remains owned by the customer, giving enterprises tools to control AI outputs and usage.
EVIDENCE
He describes Bedrock’s security architecture, the ability for customers to select from over 100 models, the guarantee that data stays with the customer, and built-in guardrails for toxicity, bias, and content filtering, along with disclosures for transparency [140-159].
MAJOR DISCUSSION POINT
Secure, customer‑controlled AI cloud services
AGREED WITH
Jay Chaudhry, Jason Oxman, Aparna Bawa, Jarek Kutylowski
Argument 4
Provide tools, disclosures, and controls so enterprises can self‑govern AI use
EXPLANATION
David emphasizes that Amazon equips enterprises with practical tools—guardrails, disclosures, and configurable controls—so they can manage AI responsibly without waiting for external regulation.
EVIDENCE
He notes that Bedrock includes guardrails, disclosure statements, and interfaces that let enterprises filter outputs, manage bias, and maintain visibility into model behavior, thereby enabling self-governance [153-159].
MAJOR DISCUSSION POINT
Enterprise self‑governance tools
AGREED WITH
Aparna Bawa, Jay Chaudhry
Argument 5
Converge on international consensus and standards (e.g., ISO 42001) to harmonize regulation
EXPLANATION
David calls for a global consensus on AI regulation, suggesting that an international standard such as ISO 42001 could provide common principles and technical requirements, allowing countries to retain sovereignty while aligning on core safeguards.
EVIDENCE
He references emerging consensus in forums like the Hiroshima agreements and proposes an ISO standard that would give everyone a common set of principles and technical standards [390-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for common international regulatory frameworks and standards to reduce fragmentation are made in the IGF discussion on harmonising data governance [S17] and [S18].
MAJOR DISCUSSION POINT
International AI standards
Argument 6
Regulation should focus on high‑risk AI applications that affect life, health, or civil rights rather than blanket rules.
EXPLANATION
David proposes that policymakers target AI uses with the greatest potential harm—those influencing fundamental rights—and align regulation with existing protections, avoiding over‑broad mandates that could stifle innovation.
EVIDENCE
He says “if you’re using a technology to make decisions that’s going to affect the life, health, or civil rights of an individual… are there laws that protect that already? do we need to supplement them?” [67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targeting high-impact AI threats rather than blanket regulation aligns with the AI-driven cyber-defense briefing that stresses focusing on the most dangerous uses [S16].
MAJOR DISCUSSION POINT
Targeted regulation of high‑risk AI
Jarek Kutylowski
6 arguments, 159 words per minute, 1076 words, 403 seconds
Argument 1
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
EXPLANATION
Jarek argues that for AI‑driven companies operating worldwide, a transparent and relatively uniform regulatory framework is essential, while still respecting national sovereignty.
EVIDENCE
He states that successful technology needs a transparent framework that is not too different across regions, and that a balance between sovereignty and common norms is valuable [79-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for transparent, comparable regulatory frameworks while respecting sovereignty is highlighted in the alignment-innovation panel [S2] and the IGF consensus on common regulations [S17].
MAJOR DISCUSSION POINT
Transparent global AI framework
AGREED WITH
Jason Oxman, Aparna Bawa, David Zapolsky
DISAGREED WITH
Jay Chaudhry, Aparna Bawa, David Zapolsky
Argument 2
Different use‑cases have varying risk grades; governance must adapt to application context
EXPLANATION
Jarek points out that the criticality of AI outcomes varies widely—from casual email translation to patent‑level documentation—so governance must be calibrated to the specific application’s risk level.
EVIDENCE
He contrasts low-risk email translation with high-risk patent translation and agent execution in enterprise settings, emphasizing the need for differentiated governance based on use-case [323-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A risk-graded governance model that varies by use-case is discussed in the balanced governance report [S2].
MAJOR DISCUSSION POINT
Risk‑graded governance
AGREED WITH
Jay Chaudhry, David Zapolsky, Jason Oxman
Argument 3
Trust in AI outcomes critical; governance must ensure reliable, safe behavior for high‑impact tasks
EXPLANATION
Jarek stresses that for high‑impact AI uses, users must trust the outcomes, requiring governance that guarantees reliability, safety, and alignment with enterprise expectations.
EVIDENCE
He notes that trust in AI results is essential, especially for critical tasks, and that governance must provide a common understanding of high-risk uses, linking back to earlier points about transparency and shared norms [321-326].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ensuring trustworthy AI outcomes through risk-based governance is a focus of the trust-and-safety assessment framework [S19].
MAJOR DISCUSSION POINT
Ensuring trustworthy AI
AGREED WITH
Jay Chaudhry, Jason Oxman, David Zapolsky, Aparna Bawa
Argument 4
Companies must help customers navigate regulations and select appropriate AI usage
EXPLANATION
Jarek says that DeepL’s role includes guiding customers through complex regulatory landscapes, helping them choose suitable AI applications, and managing the associated risks.
EVIDENCE
He explains that DeepL often assists customers who lack regulatory expertise, providing them with the ability to determine appropriate use-cases and manage compliance across diverse markets [327-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The private sector’s role in guiding customers through complex regulatory landscapes is emphasized in the IGF private-sector collaboration briefing [S22].
MAJOR DISCUSSION POINT
Customer guidance on AI regulation
Argument 5
Promote worldwide collaboration enabling multilingual communication; regulatory framework to support global cooperation
EXPLANATION
Jarek envisions AI facilitating global collaboration by breaking language barriers, and calls for regulatory frameworks that enable such cross‑border cooperation while respecting local contexts.
EVIDENCE
He describes DeepL’s mission to let anyone collaborate regardless of language or geography, and expresses hope that future regulatory frameworks will support this vision, citing examples of cooperation between India and other countries [316-320].
MAJOR DISCUSSION POINT
Global multilingual collaboration
Argument 6
Early exposure to EU AI regulation gives DeepL a competitive advantage in handling compliance globally.
EXPLANATION
Jarek notes that being headquartered in Europe, where regulatory frameworks arrived earlier, allows DeepL to develop expertise and processes that can be leveraged when entering other markets, turning regulatory pressure into a strategic benefit.
EVIDENCE
He remarks “we’re lucky as a company to have grown in Europe in kind of an environment which is maybe like slightly earlier on regulation than other places… gives us an edge to be able to understand how to work with this regulation” [326-327].
MAJOR DISCUSSION POINT
Leveraging early regulation for competitive advantage
Jason Oxman
6 arguments, 158 words per minute, 2190 words, 829 seconds
Argument 1
Risk management must be balanced with innovation and interoperability.
EXPLANATION
Oxman stresses that while managing AI‑related risks is essential, governments and industry must do so in a way that does not choke global innovation or the ability of systems to interoperate across borders.
EVIDENCE
He opens the session by saying “The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability” [1] and later notes that “They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation” [30-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balancing risk management with innovation while preserving interoperability is a central theme in the governance-innovation tension summary [S2].
MAJOR DISCUSSION POINT
Balancing risk and innovation
Argument 2
Governments need coordinated alignment to avoid fragmentation and to build trust.
EXPLANATION
He argues that AI technologies naturally cross borders, so fragmented national rules create inefficiencies; coordinated global approaches reduce fragmentation and foster trust among stakeholders.
EVIDENCE
Oxman states “there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders” [15] and earlier frames the panel’s purpose as helping “governments… to reduce fragmentation, and to build trust in AI systems” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for coordinated global AI governance to reduce fragmentation and build trust is highlighted in the alignment-innovation panel [S2] and the IGF call for common regulations [S17].
MAJOR DISCUSSION POINT
Global AI governance alignment
AGREED WITH
Aparna Bawa, David Zapolsky, Jarek Kutylowski
Argument 3
Trust and security must be embedded as core components of any AI rollout.
EXPLANATION
Oxman highlights that without a strong security overlay, the excitement around AI can lead to vulnerable deployments, making trust a non‑negotiable element of policy and product design.
EVIDENCE
He asks the panel “Talk to us about how the trust and security conversation is still a vital component around all the excitement” [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding trust and security into AI deployments is a key recommendation in the trust-and-safety assessment report [S19].
MAJOR DISCUSSION POINT
Importance of security and trust
Argument 4
Upstream governance decisions by platform providers shape downstream user behavior and must be considered.
EXPLANATION
He points out that the policies Amazon adopts at the platform level affect how downstream enterprises and consumers can use AI, urging a holistic view of governance that includes upstream impacts.
EVIDENCE
Oxman asks David “how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream?” [131-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of upstream platform governance on downstream usage is discussed in the zero-trust architecture commentary, which stresses upstream policy effects [S9].
MAJOR DISCUSSION POINT
Impact of upstream governance on downstream stakeholders
Argument 5
Agentic AI introduces new governance challenges beyond traditional translation services.
EXPLANATION
He notes that moving from simple translation to autonomous AI agents raises distinct policy questions, especially when those agents operate globally and make decisions without human oversight.
EVIDENCE
Oxman asks Jarek “How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps you did in a language translation world?” [163-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emergence of AI-driven autonomous agents as new security challenges is highlighted in the AI-driven cyber-defense briefing [S16].
MAJOR DISCUSSION POINT
Governance of autonomous AI agents
Argument 6
Flexible, risk‑based regulation is essential; overly prescriptive rules can block innovation.
EXPLANATION
He solicits examples of where a flexible, risk‑based approach helped and where a rigid regulatory stance prevented product launches, indicating his belief that adaptability in regulation is key to fostering AI progress.
EVIDENCE
Oxman says “how you’ve seen a flexible risk-based approach from government be the most effective… where a more prescriptive approach… denied you the opportunity to bring products or services to market” [179-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A flexible, risk-based regulatory approach is advocated as more effective than prescriptive rules in the governance-innovation panel summary [S2].
MAJOR DISCUSSION POINT
Need for flexible risk‑based regulation
Agreements
Agreement Points
Global coordination and common frameworks are needed to avoid fragmentation and build trust across AI governance.
Speakers: Jason Oxman, Aparna Bawa, David Zapolsky, Jarek Kutylowski
Governments need coordinated alignment to avoid fragmentation and to build trust
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
All four panelists stress that AI technologies cross borders, so governments should align policies, maintain free data flows, and adopt shared high-risk principles or transparent frameworks to reduce fragmentation and foster trust [15-17][30-34][47-51][60-63][67-68][79-82].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus aligns with calls for harmonised global AI regulations to prevent fragmentation, as highlighted in WS #145 on reviving trust and WS #172 on children’s rights [S53], and reflects the need for cross-jurisdictional alignment discussed by the ITI C-Suite panel [S39] and the high-level consensus on AI governance [S52].
Regulation should be flexible and risk‑based rather than prescriptive; one‑size‑fits‑all rules hinder innovation.
Speakers: Jay Chaudhry, David Zapolsky, Jason Oxman, Jarek Kutylowski
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation; internal product decisions weigh risk vs rollout
Flexible, risk‑based regulation is essential; overly prescriptive rules can block innovation
Different use‑cases have varying risk grades; governance must adapt to application context
Jay, David, Jason and Jarek all argue that AI rules need to adapt to specific use-cases and evolve with technology; rigid, blanket regulations would slow or block innovation [180-202][281-306][179-180][323-325].
POLICY CONTEXT (KNOWLEDGE BASE)
The preference for risk-based, flexible regulation mirrors critiques of overly prescriptive regimes such as the EU AI Act and supports principle-based frameworks that enable innovation while managing risk [S44], and echoes UNCTAD’s analysis on proportional regulation for SMEs [S51].
Security, trust and user protection must be embedded in AI systems from the start.
Speakers: Jay Chaudhry, Jason Oxman, David Zapolsky, Aparna Bawa, Jarek Kutylowski
AI can be weaponized; security overlay across all layers required; sovereignty alone insufficient
Trust and security must be embedded as core components of any AI rollout
Build security, guardrails, and data ownership into cloud services (Bedrock); give enterprises direct control
Enterprises must embed security, privacy, and user controls; partnership between provider and user essential
Trust in AI outcomes critical; governance must ensure reliable, safe behavior for high‑impact tasks
All five speakers highlight that AI deployments need strong security and trust mechanisms (layered safeguards, built-in guardrails, and clear user-provider partnerships) to prevent abuse and maintain confidence [87-95][84-86][140-159][102-108][321-326].
POLICY CONTEXT (KNOWLEDGE BASE)
Embedding security and trust from inception is a core tenet of security-by-design, echoed in discussions on agentic AI where security underpins trust [S42], safety-by-design in AI governance [S48], and the broader push for safety to be built into systems rather than added later [S45], [S46].
Providing tiered controls and giving customers choice enables risk‑appropriate use while preserving user experience.
Speakers: Aparna Bawa, David Zapolsky, Jay Chaudhry
Tiered controls and user choice enable risk decisions tailored to user type; preserve user experience
Provide tools, disclosures, and controls so enterprises can self‑govern AI use
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
Aparna describes Zoom’s configurable security toggles, David outlines Bedrock’s guardrails and disclosures, and Jay stresses that compliance alone is insufficient; together they advocate user-centric, choice-driven risk management [230-270][153-159][180-202].
POLICY CONTEXT (KNOWLEDGE BASE)
Tiered controls and user choice are advocated to balance risk management with usability, as seen in recommendations for user-centric design that preserve experience while ensuring security [S50], and in access-management guidance that stresses both protection and usability [S49]; UNCTAD also stresses user control and choice as a fairness principle [S51].
Inclusive AI access and upskilling for underserved communities are essential and also create market opportunities.
Speakers: Aparna Bawa, David Zapolsky
Aim for AI access in low‑bandwidth regions; inclusive upskilling benefits business and society
I totally, again, agree violently with Aparna in adding the inclusiveness piece…
Both Aparna and David highlight the importance of bringing AI tools to low-bandwidth, rural areas and upskilling users, noting that such inclusivity benefits both society and business models [364-374][389-390].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of inclusive AI and skills development is reflected in the Ministerial Roundtable on cultural and language aspects [S35], the AI Impact Summit’s call for lifelong learning and social protection [S37], and WSIS+20’s emphasis on bridging the digital divide for equitable development [S38].
Similar Viewpoints
Both argue that compliance checks do not guarantee security and that AI regulation must be adaptable to specific use‑cases to avoid stifling innovation [180-202][281-306].
Speakers: Jay Chaudhry, David Zapolsky
Compliance ≠ security; flexible, evolving policy needed; over‑regulation stalls progress
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation; internal product decisions weigh risk vs rollout
Both emphasize the need for a common, transparent regulatory framework that respects sovereignty while enabling seamless cross‑border data and service flows [47-51][79-82].
Speakers: Aparna Bawa, Jarek Kutylowski
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
Both recognize that decisions made at the platform (upstream) level directly affect how downstream enterprises and users can safely adopt AI services [131-136][137-160].
Speakers: Jason Oxman, David Zapolsky
Upstream governance decisions by platform providers shape downstream user behavior and must be considered
Build security, guardrails, and data ownership into cloud services (Bedrock); give enterprises direct control
Unexpected Consensus
Security must be built into user experience and education, despite differing primary focuses.
Speakers: Jay Chaudhry, Aparna Bawa
Users are the weakest link; need identity and authorization controls for AI agents
Enterprises must embed security, privacy, and user controls; partnership between provider and user essential
Users need education; enterprises must not use customer data for training; provide opt‑out mechanisms
Jay, a security-focused executive, and Aparna, a product-experience leader, both stress that security cannot be an afterthought; it must be integrated into the user interface, user education, and the partnership model, an alignment that is not obvious given their different domains [87-95][102-108][124-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Integrating security into user experience and education aligns with IGF 2023’s comprehensive security-by-design approach that includes user empowerment and awareness [S56], and with literature on balancing security controls with user-friendly design [S49].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the necessity of global coordination and shared principles to avoid fragmented AI governance; (2) the preference for flexible, risk‑based regulation tailored to specific use‑cases; (3) the imperative to embed security, trust and user‑centric controls into AI products from the outset; and (4) the importance of inclusive access and upskilling for underserved populations. These points cut across multiple domains—policy, technology, security and development—indicating a high level of consensus among industry leaders.

High consensus; the shared positions suggest that future AI governance initiatives are likely to prioritize international standards, risk‑based regulatory approaches, security‑by‑design, and inclusive deployment, providing a solid foundation for coordinated policy action.

Differences
Different Viewpoints
Extent of government alignment versus over‑alignment
Speakers: Jay Chaudhry, Aparna Bawa, David Zapolsky, Jarek Kutylowski
Need for balanced alignment; over‑alignment stifles innovation
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
Global market requires transparent, similar frameworks; balance sovereignty with shared norms
Jay warns that too much alignment or governance can kill innovation and that over-alignment is unhelpful [24-28]. In contrast, Aparna stresses the need for a basic shared framework to keep cross-border data flows open, David argues that free flow of goods and data is essential and calls for common high-risk principles, while Jarek emphasizes a transparent, comparable regulatory layer that respects sovereignty [47-51][58-63][67-68][79-82].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between necessary coordination and the risk of over-alignment is discussed in the ITI C-Suite panel on cross-jurisdictional AI governance [S39] and in WS #145’s debate on harmonised regulations versus national sovereignty [S54].
Security emphasis versus user‑experience and choice
Speakers: Jay Chaudhry, Aparna Bawa
AI can be weaponized; security overlay across all layers required; sovereignty alone insufficient
Zoom’s product development prioritizes user experience, using configurable controls to balance security, privacy and usability
Jay stresses that AI systems need a security overlay at every layer to prevent abuse, arguing that security must keep pace with rapid AI adoption [87-95][96-97]. Aparna, while acknowledging security, argues that Zoom’s design centers on user experience, offering tiered controls and choice so different user groups can maintain usability while managing risk [252-257][230-270].
POLICY CONTEXT (KNOWLEDGE BASE)
This trade-off is highlighted in analyses of cognitive vulnerabilities that stress balancing strong security with a seamless user experience [S49] and in discussions on user-experience design that must accommodate varying user sophistication [S55].
Unexpected Differences
Perception of AI agents as a security threat versus focus on trust in AI outcomes
Speakers: Jay Chaudhry, Jarek Kutylowski
Users are the weakest link; need identity and authorization controls for AI agents
Trust in AI outcomes critical; governance must ensure reliable, safe behavior for high‑impact tasks
Jay highlights that AI agents can be hijacked and become the weakest security link, calling for zero-trust identity and authorization controls for agents [209-212]. Jarek, while discussing trust, focuses on the reliability of AI results for high-impact uses and does not address agent-level security, indicating a differing risk perception [321-326].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors viewpoints that security of AI agents is a foundational layer for building trust [S42] and broader conversations on moving safety from an afterthought to an integral part of AI outcomes [S48].
Overall Assessment

The panel largely converged on the need for balanced, risk‑based AI governance that supports innovation and global interoperability. The main points of contention revolve around how much alignment is appropriate and the relative priority of security versus user experience. While all agree that over‑regulation can hinder progress, Jay stresses the dangers of excessive alignment and the necessity of layered security, whereas Aparna, David and Jarek advocate for common frameworks to keep cross‑border data and services flowing.

Moderate – disagreements are nuanced rather than outright oppositional. They reflect differing emphases (security vs usability, alignment vs over‑alignment) that could affect policy design, suggesting that future governance discussions will need to reconcile these perspectives to achieve both innovation and robust protection.

Partial Agreements
All three agree that regulation should be flexible and risk‑based, tailored to specific AI applications, rather than blanket rules. Jay calls for evolving policies and warns that compliance alone does not equal security [180-202]. David stresses use‑case‑specific regulation and the danger of one‑size‑fits‑all approaches [281-306]. Jarek highlights that AI tasks have different risk grades and governance must be calibrated accordingly [323-325].
Speakers: Jay Chaudhry, David Zapolsky, Jarek Kutylowski
Need for flexible, evolving policy rather than rigid, prescriptive regulation
Regulation must be use‑case specific; one‑size‑fits‑all harms innovation
Different use‑cases have varying risk grades; governance must adapt to application context
Both emphasize that unrestricted cross‑border flows of data and services are vital for global digital platforms and that fragmented national rules create friction. Aparna points to Zoom’s reliance on global data flows [47-51], while David describes Amazon’s dependence on free flow of goods, data and services across borders [58-63].
Speakers: Aparna Bawa, David Zapolsky
Cross‑border data flows essential; fragmented rules impede progress; call for common framework
Free flow of goods and information critical; over‑regulation creates friction; propose common high‑risk principles
Takeaways
Key takeaways
Global AI governance needs a balanced alignment: some common principles are essential, but over‑alignment or overly prescriptive rules can stifle innovation.
Cross‑border data flows and the free movement of goods and information are critical for AI services; fragmented national rules create friction.
A risk‑based, use‑case‑specific regulatory approach is more effective than a one‑size‑fits‑all model.
Security and trust must be embedded across all layers of AI systems; sovereignty alone is insufficient without safeguards against malicious use.
Enterprises and end‑users share responsibility: providers must build privacy, security, and opt‑out mechanisms, while users need education on safe AI usage.
Future progress hinges on inclusive access, upskilling in low‑bandwidth regions, and the development of international standards (e.g., ISO 42001) to harmonise governance.
Resolutions and action items
Propose the creation of a common, high‑risk principle framework that can be adopted internationally (suggested by David Zapolsky).
Encourage governments to focus regulatory efforts on AI‑enabled threats (ransomware, nation‑state misuse) rather than broad pre‑emptive bans (suggested by Jay Chaudhry).
Implement tiered controls and user‑choice mechanisms within products to allow different risk tolerances (highlighted by Aparna Bawa).
Embed security guardrails, data‑ownership guarantees, and model‑output controls into cloud AI services (e.g., Amazon Bedrock) (outlined by David Zapolsky).
Promote global upskilling and inclusive AI access, especially in low‑bandwidth regions (advocated by Aparna Bawa).
Work toward an international consensus and standards such as ISO 42001 to provide a common technical and governance baseline (suggested by David Zapolsky).
Unresolved issues
Specific mechanisms for achieving global regulatory alignment and how to operationalise the proposed high‑risk principle framework.
How to reconcile differing privacy and data‑protection regimes while maintaining a common governance layer.
Concrete processes for ensuring AI agents are not hijacked or used maliciously across diverse jurisdictions.
Details on how governments can support inclusive AI adoption without imposing burdensome compliance on innovators.
The timeline and governance structure for developing and adopting international standards like ISO 42001.
Suggested compromises
Adopt a flexible, evolving policy that sets a basic common layer of norms while allowing sovereign variations for privacy and other local concerns.
Use a risk‑based approach that distinguishes high‑risk applications (e.g., decisions affecting health, civil rights) from low‑risk ones, applying stricter controls only where needed.
Provide configurable product features (security toggles, privacy settings, opt‑out options) so enterprises and individual users can tailor risk levels to their context.
Balance the need for security overlays with innovation speed by integrating security into the development pipeline rather than adding it as a later compliance hurdle.
Thought Provoking Comments
“If each country has its own governance rules for AI, a large corporation operating in 50 countries will face a lot of issues. Some alignment is good, but over‑alignment kills innovation.”
He succinctly framed the core tension between fragmented regulation and the need for enough flexibility to keep innovation alive, setting the stage for the entire governance debate.
His point prompted the panel to explore the balance between necessary oversight and stifling compliance, leading directly to Aparna’s discussion of cross‑border data flows and David’s warning about premature regulation.
Speaker: Jay Chaudhry
“Zoom would not exist without cross‑border data flows. When governments add more restrictions, they impede their own citizens’ progress. There needs to be a basic, commonly understood framework, but also respect for national sovereignty.”
She linked AI governance to a concrete, everyday technology (Zoom) and highlighted the trade‑off between security/privacy and the economic benefits of data fluidity, grounding the abstract debate in real‑world impact.
Her remarks expanded the conversation from high‑level policy to practical product implications, prompting David to echo the importance of free flow of goods and information for Amazon’s global services.
Speaker: Aparna Bawa
“Regulation before we understand how AI will be used creates costs, uncertainty, and inhibits adoption. Example: Colorado’s comprehensive AI law is well‑intentioned but nobody knows how to apply it yet.”
He introduced a concrete case study showing the pitfalls of premature, overly prescriptive regulation, reinforcing the need for a risk‑based, evidence‑driven approach.
This example shifted the discussion toward concrete policy failures, encouraging Jay to discuss the dangers of blanket compliance and prompting Jarek to stress the need for transparent, common frameworks.
Speaker: David Zapolsky
“Security must overlay every layer of AI – from the sovereign infrastructure down to the models. AI agents could become the weakest link if we ignore who can access and control them.”
He added a security dimension to the governance conversation, emphasizing that sovereignty isn’t just geographic but also about access control, and warned of emerging threats like rogue AI agents.
This prompted Aparna to talk about the partnership between users and enterprises in managing risk, and led the panel to consider future‑focused threats beyond data privacy.
Speaker: Jay Chaudhry
“We need a common set of principles: define ‘high‑risk’ uses (e.g., decisions affecting life, health, civil rights) and regulate those, rather than trying to create a unified theory of AI regulation.”
He offered a pragmatic, principle‑based roadmap for global alignment, moving the dialogue from abstract alignment to actionable criteria.
His suggestion steered the conversation toward concrete policy levers, influencing Jarek’s call for transparent, globally consistent frameworks and setting up the later discussion on risk‑based product decisions.
Speaker: David Zapolsky
“AI agents can be hijacked and cause massive damage; we must extend zero‑trust architectures to cover identity, authorization, and control of AI agents.”
He highlighted a novel, technical risk that many regulators may overlook, expanding the scope of governance to include operational security of autonomous agents.
This deepened the technical layer of the debate, leading Aparna to discuss how Zoom embeds user‑level controls and prompting Jarek to note the higher stakes of agentic AI in critical domains like drug development.
Speaker: Jay Chaudhry
“The biggest issue will be AI‑enabled ransomware and nation‑state use of AI. If we don’t address these proactively, governments will over‑react with tighter rules that could stifle innovation.”
He projected a future threat landscape, linking security failures to potential regulatory backlash, thereby connecting short‑term technical safeguards with long‑term policy outcomes.
This forward‑looking warning reframed the conversation toward preventive security measures as a way to preserve regulatory flexibility, influencing the final aspirations of other panelists.
Speaker: Jay Chaudhry (closing forward‑looking question)
“Inclusivity and upskilling are essential – AI should reach villages with low bandwidth so farmers can benefit. If governments champion this, it creates markets for enterprises and lifts societies.”
She shifted the focus from corporate risk to societal impact, emphasizing that governance should enable broad access, not just protect elite users.
Her comment broadened the narrative to social equity, prompting David to speak about global consensus and ISO‑style standards that can support inclusive deployment.
Speaker: Aparna Bawa (forward‑looking question)
“We need an emerging international consensus, similar to technical standards like ISO 42001, that gives a common set of principles while allowing sovereign nuances.”
He proposed a concrete mechanism—international standards—to reconcile global alignment with national sovereignty, offering a tangible path forward.
This crystallized the earlier abstract calls for alignment into a specific solution, influencing Jarek’s optimism about a unified framework and wrapping up the discussion with a clear actionable vision.
Speaker: David Zapolsky (forward‑looking question)
“DeepL’s mission is to let anyone collaborate regardless of language or geography; governance should enable that, not hinder it. We must give customers tools to manage risk themselves.”
He tied the company’s core purpose to the governance debate, emphasizing user empowerment and the need for flexible, customer‑centric controls in a global context.
His perspective reinforced the theme of user choice introduced by Aparna and highlighted the practical side of implementing governance, rounding out the discussion with a focus on product design.
Speaker: Jarek Kutylowski
Overall Assessment

The discussion pivoted around the tension between global AI alignment and the need for flexibility. Early remarks by Jay and Aparna framed the problem of fragmented regulation versus innovation, which was sharpened by David’s concrete example of Colorado’s premature law. Subsequent comments introduced security (Jay), risk‑based principles (David), and user‑centric product design (Aparna, Jarek). Each of these insights redirected the conversation toward actionable frameworks—common high‑risk definitions, zero‑trust for AI agents, and inclusive access—while also warning of future threats that could trigger over‑regulation. Collectively, these pivotal comments moved the panel from abstract policy talk to concrete, multi‑dimensional solutions, culminating in a shared vision of international standards that balance sovereignty, security, and inclusive innovation.

Follow-up Questions
How can governments define and agree on common principles to identify ‘high‑risk’ AI uses across jurisdictions?
David highlighted the need to work backwards from observable harms to define high‑risk AI, noting current uncertainty hampers regulation and innovation.
Speaker: David Zapolsky
What flexible, risk‑based regulatory frameworks can keep pace with rapid AI development without stifling innovation?
Jay argued that over‑regulation slows progress and called for policies that evolve with AI’s unknown behaviors.
Speaker: Jay Chaudhry
How can a security overlay be effectively implemented across the five layers of AI sovereignty to prevent data poisoning and rogue agents?
Jay warned that sovereign AI stacks can be vulnerable if not protected at each layer, emphasizing the need for comprehensive security measures.
Speaker: Jay Chaudhry
What methods can be used to assess and mitigate the impact of AI agents being hijacked or misused within enterprise environments?
He noted the emerging risk of AI agents becoming the weakest link, requiring new security controls and identity/authorization mechanisms.
Speaker: Jay Chaudhry
How can cross‑border data flows be balanced with privacy and security requirements to support global AI innovation?
Aparna stressed that unrestricted data movement is essential for AI, but must be reconciled with national privacy and security norms.
Speaker: Aparna Bawa
What strategies can ensure inclusive AI access and upskilling for users in low‑bandwidth or underserved regions (e.g., rural India)?
She highlighted the importance of democratizing AI benefits and the challenge of delivering technology where connectivity is limited.
Speaker: Aparna Bawa
What should an international AI standards framework (e.g., ISO 42001) encompass to provide a common set of principles and technical specifications?
David advocated for a converging global consensus on standards to guide responsible AI deployment and reduce regulatory fragmentation.
Speaker: David Zapolsky
How can companies like DeepL design governance policies for agentic AI that satisfy diverse regulatory regimes while maintaining global interoperability?
Jarek discussed the shift from translation to autonomous agents, raising the need for adaptable governance that meets varying country requirements.
Speaker: Jarek Kutylowski
What best practices can be established for giving end‑users granular control over AI features (e.g., transcription, data usage) to protect privacy and comply with regulations?
She described the necessity of user‑level opt‑outs and controls to balance functionality with legal and ethical obligations.
Speaker: Aparna Bawa
How can cloud providers embed transparent guardrails and disclosures in AI services (like Amazon Bedrock) to enable enterprises to manage risk globally?
David outlined Amazon’s approach to security, data ownership, and content filtering, suggesting a need for standardized, globally applicable safeguards.
Speaker: David Zapolsky
What metrics or research approaches can quantify the impact of AI regulation on innovation and market entry timelines?
Several panelists noted that over‑regulation delays product launches, indicating a need for empirical studies on regulatory effects.
Speaker: Multiple (Jay Chaudhry, David Zapolsky, Aparna Bawa)
How can governments differentiate regulatory requirements based on AI application domains (e.g., consumer recommendation vs. medical documentation) to avoid blanket restrictions?
He emphasized that risk varies by use case, and undifferentiated rules could hinder beneficial AI applications.
Speaker: David Zapolsky

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Sovereign and Responsible AI Beyond Proof of Concepts

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session opened by highlighting that only a minority of AI initiatives reach operational use, with just 30% of pilots advancing to production [11]. Speakers argued that a core barrier is a lack of trust in AI systems, both at the organizational level and among individuals whose data and outcomes are affected [13]. Supporting this, the OECD AI Observatory records a rapidly growing catalogue of incidents, with 600 harms reported in December 2025 alone, illustrating the real-world risks of untrusted deployments [22-23]. Concrete examples cited included Romanian voice-cloning scams, AI-generated books in Cairo that omitted human oversight, and biased facial recognition at borders, all of which erode public confidence [28-29][30-35][36-38].


The presenters identified six common failure points for proof-of-concept projects: weak adoption planning, governance gaps, misalignment with societal goals, sovereignty concerns, sustainability pressures, and inadequate change management [42-61]. To address these, they proposed an “AI in 4D” framework comprising sovereignty, green (sustainability), responsible, and valuable dimensions, each intended to surface and mitigate harms before scaling [64-66]. A health-care pilot illustrated how neglecting the green dimension (excessive compute, power, and water demands) rendered the project financially and politically untenable, prompting participants to label the issue as sustainability [73-80]. A traffic-light optimization case showed that focusing solely on technical efficiency ignored the value dimension, leading to increased congestion in low-income neighborhoods and community backlash [90-99].


A justice-system AI example highlighted sovereignty problems when a model hosted offshore could not be audited or updated, underscoring the need for control over critical public-sector AI [103-108]. Audience discussions reinforced that responsible AI (addressing bias, fairness, and human-centered design) and valuable AI (measuring societal benefit) are intertwined, as seen in a social-benefits pilot that lacked explainability and caused harm to vulnerable citizens [108-124]. Omeed expanded on sovereignty, emphasizing that data and model control are prerequisites for trust, and warned that reliance on foreign AI services could jeopardize national objectives [131-148]. He further argued that green AI ties environmental impact to economic viability, noting that unsustainable scaling leads to cost overruns and eventual failure [154-165].


The session concluded that no single dimension suffices; organizations must balance trade-offs, adopt comprehensive AI policies, define measurable KPIs for each lens, and upskill teams to ensure trustworthy, sustainable, and valuable AI deployments [349-362].


Keypoints


Major discussion points


AI pilots have a low conversion rate because trust is not built and harms are overlooked.


Only about 30% of AI projects reach production, and many fail to consider trust-related issues such as data sharing, impact on jobs, and potential harms [11-19]. The OECD AI Observatory tracks a rapidly growing number of incidents (≈600 in December 2025) that erode confidence [20-24], with concrete examples ranging from voice-cloning scams in Romania to AI-generated books in Cairo and biased facial recognition at borders [36-38].


Six common reasons why proof-of-concepts (PoCs) stall, summarized in a “4-D” framework.


The speakers list six failure categories (adoption vs. impact, governance, misalignment, sovereignty, sustainability, and change management) [42-60]. They then condense these into four lenses that must be addressed to build trustworthy AI: Sovereignty, Green (sustainability), Responsible AI, and Value [61-66].


Real-world scenarios illustrate each lens and the consequences of ignoring them.


Health: an AI radiology triage tool required more compute, power and water than available, causing a sustainability failure [70-78].


Transport: an AI traffic-light optimizer reduced travel time but diverted traffic to low-income areas, exposing a value-misalignment issue [89-93].


Justice: a case-routing system hosted offshore with no auditability raised sovereignty and responsibility concerns [101-106].


Social benefits: an AI eligibility engine lacked explainability and showed bias, highlighting responsible-AI and value gaps [108-114].


Trade-offs between the four dimensions are inevitable and must be managed explicitly.


Participants asked how sovereignty might outweigh value or vice versa, and the presenters explained that trade-offs (e.g., choosing foreign models for speed vs. retaining control) require transparent decision-making [279-286][290-306]. Sustainability versus rapid adoption was also discussed as a common tension [306-313].


Actionable next steps: policies, frameworks, KPIs, and up-skilling.


The session concludes with a call to develop AI policies that embed all four lenses, adopt responsible-AI frameworks, define measurable KPIs for ethics, sustainability and value, and invest in team upskilling [342-360]. A white paper summarising eight to ten practical recommendations is offered for further guidance [342-347].
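The call for "measurable KPIs for each lens" can be made concrete with a toy scorecard. The following Python sketch is not from the session; every KPI name and threshold is a hypothetical assumption, used only to illustrate the idea that each of the four lenses needs a quantitative gate before a pilot is scaled.

```python
# Illustrative sketch: one hypothetical KPI gate per 4D lens.
# All KPI names and thresholds are invented for illustration.

LENSES = {
    "sovereignty": {"kpi": "share of model/data under local control", "threshold": 0.8},
    "green":       {"kpi": "kWh per 1k inferences", "threshold": 5.0, "lower_is_better": True},
    "responsible": {"kpi": "max bias gap across groups", "threshold": 0.05, "lower_is_better": True},
    "valuable":    {"kpi": "user-reported benefit score (0-1)", "threshold": 0.7},
}

def evaluate_pilot(measurements: dict) -> dict:
    """Return pass/fail per lens; a pilot should clear all four before scaling."""
    results = {}
    for lens, spec in LENSES.items():
        value = measurements[lens]
        if spec.get("lower_is_better"):
            results[lens] = value <= spec["threshold"]
        else:
            results[lens] = value >= spec["threshold"]
    return results

pilot = {"sovereignty": 0.9, "green": 7.2, "responsible": 0.03, "valuable": 0.75}
print(evaluate_pilot(pilot))
# {'sovereignty': True, 'green': False, 'responsible': True, 'valuable': True}
```

In this toy example the pilot fails only the green gate (energy use above the assumed threshold), which mirrors the session's radiology case: technically sound, but unsustainable to scale.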


Overall purpose / goal


The discussion aimed to explain why most AI pilots never scale, introduce a structured “4-D” approach (sovereignty, green, responsible, value) for evaluating and designing trustworthy AI, illustrate the approach with concrete case studies, and equip participants with concrete actions (policies, frameworks, metrics) to move from experimental PoCs to production-ready, impact-driving AI systems.


Overall tone


The conversation began with a formal, informational tone, presenting statistics and definitions. As audience interaction increased, the tone shifted to collaborative and exploratory, with participants sharing scenarios and questions. Towards the end, the tone became supportive and actionable, emphasizing practical guidance, shared resources, and encouragement for attendees to adopt the framework. Throughout, the speakers maintained a constructive, solution-focused demeanor.


Speakers

Omeed Hashim - Role/Title: not specified - Areas of expertise: AI governance, sovereign AI, responsible AI (as discussed) [S1]


Audience - Generic participant; individual members mentioned with their own backgrounds:


* Yuv - Individual from Senegal; role/title not specified [S2]


* Professor Charu - Professor, Indian Institute of Public Administration; expertise in public administration [S3]


* Dr. Nazar - Role/title not clearly mentioned [S4]


Theresa Yurkewich Hoffmann - Role/Title: not specified - Areas of expertise: AI trust, AI governance, AI policy (as discussed) [S5]


Additional speakers:


Ami Kotecha - Co-founder, Amro Partners; sector: real estate and data spin-out (mentioned in transcript)


Shri - Name referenced in audience comments; no role or title provided


Full session report: Comprehensive analysis and detailed insights

Context & Trust Gap – Theresa opened the session by noting that only about 30% of AI pilots progress to production, and that a lack of trust in the technology, its data, and its societal impact is the principal barrier [1-2].


OECD AI Observatory – She highlighted a rapid rise in recorded AI harms, citing 600 incidents reported for December 2025 [3-4]. Concrete examples were given: Romanian voice-cloning scams [5-6]; a Cairo book fair featuring AI-generated titles with printed prompts and model instructions, raising authorship questions [7-9]; and facial-recognition systems at borders that performed unevenly across population groups, eroding public confidence [10-12].


Why PoCs Fail – Six Themes – Theresa identified six recurring reasons for stalled proof-of-concepts: (1) a gap between adoption planning and real-world impact; (2) governance failures such as missing risk management and accountability; (3) misalignment between the AI’s purpose and societal goals; (4) sovereignty concerns about data and model control; (5) sustainability pressures, especially energy and water use; and (6) inadequate change management and cultural readiness [13-18].


4-D Framework – She then introduced a “4-D” framework that consolidates the six failure points into four lenses – Sovereignty, Green (sustainability), Responsible AI, and Valuable AI – to help anticipate harms before scaling [19-21].


Scenario Walk-through


Public-health X-ray triage: the model required far more compute, power and cooling water than the host region could provide, making the solution financially and politically untenable; participants flagged this as a Green issue [22-26].


Traffic-light optimisation: while average commute times fell, traffic was rerouted through low-income neighbourhoods, worsening pedestrian safety and provoking community backlash; this was marked as a failure of the Valuable lens because technical gains did not align with citizen-perceived value [27-31].


Justice-system routing: the PoC performed well in testing but was hosted offshore with no clear audit trail or control over model updates, highlighting a lack of Sovereignty oversight [32-35].


Social-benefits eligibility engine: the system could not explain its decisions, exhibited bias across age, ethnicity and gender, and offered no escalation path, thereby harming vulnerable citizens and missing both Responsible and Valuable dimensions [36-40].


Omeed’s Deep-Dive


Sovereignty: Omeed argued that trust hinges on who controls data and models, warning that reliance on foreign AI services creates vulnerability if providers withdraw access; sovereignty must be baked into design from the start [41-45].


Green AI: He linked environmental impact directly to economic viability, noting that unsustainable systems incur higher operating costs and are unlikely to scale; he cited a new data-centre consuming as much electricity as the whole of Los Angeles [46-50].


Responsible AI: Described as encompassing ethics, bias mitigation, governance, security and human-centred design; he referenced Prime Minister Modi’s remarks on human-centred AI and illustrated the point with a nursing-home hydration-monitoring case where poor design could harm staff and families [51-56].


Value: Defined as real-world benefit beyond cost-savings; he contrasted a UAE executive’s ambition to serve 120 million people with India’s context, where the same scale would not add societal value [57-60].


Audience Contributions


Ami Kotecha: asked for clearer governmental guidance on “safe vs. experimental” AI use and noted an upcoming data-protection law expected to roll out in 18-24 months [61-63].


Platform vs IP: an entrepreneur described exclusive client contracts (e.g., with PepsiCo) that lock in IP and prevent platform-level deployment; Omeed suggested a service-oriented, co-creation model, citing India’s UPI ecosystem as a successful open-service example [64-68].


Ranking the lenses: Theresa placed Responsible/Valuable AI slightly above Sovereignty, arguing that responsible AI can act as an umbrella covering other concerns; Omeed countered that sovereignty can be non-negotiable for trust-critical systems and may need to dominate in certain contexts [69-73].


Trade-off discussion: participants noted that organisations with strong carbon-reduction goals may prioritise Green AI even if it slows rollout, while others may accept higher emissions to accelerate adoption; similarly, choosing an external model can speed delivery but sacrifices long-term control, whereas building domestic capability delays value creation but secures sovereignty [74-78].


Final quick question: an audience member asked what aspects might be missed when focusing on a single lens; the presenters deferred a detailed answer to follow-up email [79-80].


Closing & Action Items – The presenters announced a white-paper that outlines a set of actionable recommendations for each of the four dimensions and shared a link in the chat [81-82]. They urged organisations to draft an AI policy explicitly addressing sovereignty, sustainability, responsibility and value; adopt a responsible-AI framework with clear governance questions; define quantitative KPIs for ethics, carbon impact and user benefit; and invest in up-skilling programmes that incorporate diverse stakeholder perspectives [83-88]. The session ended with contact details, a QR-code for feedback, and a note that two audience questions remained unanswered [89-91].


Overall, the discussion reached strong consensus that trustworthy AI requires a holistic 4-D lens, that trade-offs among sovereignty, sustainability, responsibility and value are inevitable and must be documented, and that coordinated policy, measurable metrics and collaborative business models are essential to move AI pilots from proof-of-concept to production-ready, impact-driving systems [92-94].


Session transcript: Complete transcript of the session
Theresa Yurkewich Hoffmann

Okay. Sounds good. Okay. Well, this session will be all around that. So if we can have the next slide. So what we want to talk to you about today is that there are so many different AI projects and AI pilots happening in the world. And a pilot is the same as a proof of concept. It’s an idea that you’re testing, and it’s a concept that you’re testing, to see if that idea is something that you can put into implementation later on. And I was looking at the stat of how many AI pilots are in the world, and that was very difficult to quantify.

But what I did find was that only 30% of all the AI projects actually go into production. So what we’re finding in the world is that we have lots of different AI ideas, but really a difficulty in translating that into something real. And the point of this session and what I think is the point of the whole AI summit was that one of those reasons is because we don’t have trust. So if we can have the next slide. So if we think about trust, that could be an organization’s trust that the AI will work. It can be trust in us as individuals around how our data will be shared, the outputs that it will give us.

It could be trust in terms of the impacts that it will have on people and people’s lives. It could be trust in terms of jobs and how that will work. And with that, what we’re seeing is a lot of these AI projects are failing to consider that. And I don’t know if you’re familiar with the OECD AI Observatory, but they do a monitor where they essentially monitor all of the harms and all of the AI incidents around the world. And you can see that it’s been growing exponentially. In December 2025 alone, there were 600 different incidents in the world. So those are 600 different times that people were harmed or that there was some kind of AI hazard that was created through a pilot.

If we can have the next slide. It’s just to zoom in, so this is a little bit difficult for you to read now. But in that harms monitor, you can click on any of them and learn more about them. So some that I found, the first one is in Romania. AI was being used to clone people’s voices and then scam others by making them think that the person was in distress. As well, there was an example, I believe it was in Cairo. So there was a book fair, and a lot of the books there were actually produced using an equivalent of ChatGPT, using generative AI. But there were no humans included in that project, so the books were printed with the prompts and the AI instructions still in them.

So that created a lot of issues around creativity: are these books generated by AI? Are they what we’re looking for? Is that what we thought we were buying? And then there are several other examples happening all around the world where this is happening with facial recognition, for example. So using that at borders, and all of a sudden that might not work equally between different types of people. And all of these really build towards people losing trust in AI and being fearful of using it. So these are some examples, and we’ll go into next what we can do about that. So next we’re going to look at why these proof of concepts fail, and how do we shift from just experimenting to actually having impact.

So I can have the next slide. So I put here six ideas of what we’re seeing with the customers we work on is why proof of concepts are not working. The first one is between adoption and impact. So a lot of times we’ll have organizations that are working on AI and they’ve just thought about producing something but they haven’t actually thought about how will people use it. Will it have the goal that you’re hoping it to have? Or say, for example, I’m using a legal tool. Will it actually serve the purpose that I’m looking for? Will it require more work for me to actually review everything it’s doing? So there’s a gap there. The second is around governance failures.

So I’m not sure how many of you have thought about risk management. How do you identify all of the risks that are coming up? Who’s going to be accountable for solving them? That might be things like, is it treating people differently? Is it biased? It might be things around security, for example. And then there’s also a failure around misalignment. So between what you’re looking for and what society is looking for, those might not be aligned. So if you’re, for example, prioritizing AI use to automate people’s work, all of a sudden people are thinking, what about job loss? So there’s not really a link in value there, and that’s another reason. We’ve got three other challenges. The first one is sovereignty, which I think, if anyone was around the summit today or this week, everybody was talking about sovereignty.

So questions around how do we maintain control? Who is responsible? If, for example, a foreign government decides to turn off that AI access, is that something we trust? Or how do we deal with that? We also have sustainability pressure, so thinking about the carbon cost of using AI and the lack of clarity around that. And then change management is really all the people. So if we’re thinking about these frontier firms where people are working with agents, what does that work culture look like? Have we actually thought about how people use AI and have time to test it and practice with it? Have we thought about the relationship between people and AI and how that works as well? So these are six quick concepts. And if we can have the next slide, it's just a point to make that when we’re considering a proof of concept, we’re really just considering, does it function? We weren’t considering any of those other six things, and if we want to scale AI, we need to think about everything else. So next slide. So I guess the point of this session is really to think about how do we actually do that. So what we have thought of is calling it AI in 4D, so four-dimensional: the idea that you need to look at four different lenses to build trust in AI. If we could have the next slide. And when we’re looking at that, we’re thinking if you can look at all these four different lenses, that’s really going to help you predict any harms or challenges that could come with the AI model and actually prevent them, so that you can deploy and scale that AI. There are four dimensions that we’re looking at. The first one is sovereignty, so thinking about who controls it, not just data but looking at all the security measures behind it: where does the model come from, who has access to it. We’re looking at green, so that’s sustainability: can this scale without destroying our climate goals, for example. We’re looking at responsibility, so that is thinking about ethics and governance and bias and fairness and human-centered design.

And then valuable, so is this project actually really going to deliver a real-world benefit to people? So next slide. This one, I think it might be difficult for us to create a poll, so what we’ll do is we’ll do it by hand instead. So if we can just go to the next slide. What I thought we could do before we give you more information on those four dimensions and how to apply them and break out into groups is we could just have some quick scenarios and test what your knowledge is of those themes already. So I’m going to give you an example, and then we’ll do a show of hands of who thinks what lens is missed here.

So this example is with a public health company. They’re using AI to read different x-rays and radiology scans. And the point of the proof of concept is to help triage different illnesses or different breaks, things that you might find in the scan, and reduce that backlog. So when they actually started modeling and rolling it out, the team realized that this required more compute than they had available. It would exceed, actually, the available power supply, so there was not going to be the ability to use it consistently. And that, actually, there was a large demand on water because the GPUs needed to be cooled, and this is in a water-sensitive area. So that would be another challenge between people and the planet.

So this program failed, this hypothetical program failed, because it was financially and politically impossible to run. So who thinks that this is a problem because of sovereignty? Who thinks that this is a problem with sustainability? Yeah? Who thinks that this is a problem with responsible AI and value? Yeah, I agree. So I marked this one as sustainability. I think it’s an example of the dynamics that we might have in the real world: we want to scale AI, do really great things, but actually we haven’t considered the power or the water usage that that has, because we either don’t have the information or it hasn’t been something that’s been baked in up front to think about.

And we will give you some higher-level insight into what this means and how to apply it in a moment. Okay, the next one. So we’ve got a second one. This is dealing with transport. So I think we’ve all dealt with traffic this week; we’re looking at that in this scenario here. This project is to optimize traffic lights across the city and smooth congestion. But when they started implementing this project, it was only looking at average commute time. It was diverting traffic into lower-income areas, and pedestrian safety actually became worse. So while this met the technical targets, in that it did reduce and optimize time, there was a lot of community backlash. So does someone want to tell me which one they think this is a failure of?

Audience

Sovereign and responsibility.

Theresa Yurkewich Hoffmann

Yeah, we’ve got some sovereignty, we’ve got responsibility. I think this one is actually value. Here, what the ministry thought was valuable, reducing overall time, is not what’s valuable to the people. What’s valuable to the people is that they have safety when walking, that you protect communities, and that you don’t create biased impacts. Next one. So now we’re looking at justice. Here we’ve got a justice department building AI to triage different complaints from citizens and reroute them to the right legal body, whether it’s the courts or a commissioner or something like that. In the pilot it performed really well, but later, when they started to prepare to deploy it into production, the team discovers that, one, the model is hosted offshore.

Two, they don’t have a lot of information on when the model will be updated, and they don’t have control over that; this government doesn’t. Three, the logic within the model could change based on updates they couldn’t control. And four, they can’t audit the logs. So what do we think this time?

Audience

Yes.

Theresa Yurkewich Hoffmann

Okay, everyone is saying sovereignty. Sorry, did you say something else? Responsible AI? I think that could also be here, because they hadn’t thought of all these risks beforehand. But I agree, especially when you’ve got a national organization, they need to have control of the model and how it functions; not being able to update it or audit it in such a sensitive area like justice is a real challenge. So sovereignty is the challenge here. And then the last one. Okay, so here we’ve got a social services agency, and they’re using AI to determine who’s eligible for social benefits. The pilot showed that they were able to reduce the processing time and have fewer manual checks, but when they were actually doing this in real life, the model wasn’t able to explain why it had made a decision, why it had allocated benefits to someone versus someone else. There was no way to understand how to appeal it, so if you were rejected, for example, you couldn’t understand why that was and how to change that decision. There was bias discovered between different groups, so age groups or ethnicity or gender.

It wasn’t applying the rules equally to everyone. And there was no agreed process for how you would escalate if there was a problem. So this became very seriously harmful, and there were a lot of vulnerable citizens who could be impacted. So in this scenario, what do we think, between responsible and value? Anybody else? Training data not accurate? Agreed. So I agree, I think this one is a good example of responsible but also valuable. Responsible AI is thinking about bias. It’s thinking about fairness. It’s thinking about the data that you have. It’s thinking about all these harms up front and how you’re going to deal with them. And then equally with value, people need to see the value of why they’re using AI in a public system.

And if it’s actually harming people, then it’s not necessarily a good use case. So far, everyone is doing well. I think we can move on. But what we wanted to go through now is how does this work in real life? What does this actually look like? And so I’ll pass to Omeed. Can we have the next slide, please?

Omeed Hashim

Right. So I think it’s clear, you know, having had this conversation and the contribution from yourselves, that it’s not so straightforward, because there are different dimensions, and this is the point that Riz is making in terms of having to look at different angles. So over the last two days, or definitely the day before yesterday, I was going around the summit hall, and I was asking everyone, because everywhere you see it says sovereign AI, sovereign AI. I was asking them: what do you mean by sovereign AI? And some people were talking about, oh, we need to have our data centers here. Somebody was saying our models need to be here. There were different kinds of conversations about what sovereign AI actually means in the context of AI and how it works and how it deploys and so on and so forth.

But the key thing is that ultimately it comes down to control. And my view is that it’s not even just about the organization, the sector potentially, or the nation, but also about the people. So where is your data? Who’s actually looking at your data? Why are they looking at your data? What will they do with your data? If you don’t have an understanding of that, the likelihood of you trusting that system is very low, and therefore it would be susceptible to failure. So it’s really, really key to understand the implications of data sovereignty, AI sovereignty, and so on. I mean, I was talking to one country, Serbia, and they were saying: we have a view that we need to have control of our own environment, we’re building new large language models in our own geography, and we are going to have control over what we do.

And I think that’s the key thing. But the important thing is that if trust is lost in terms of sovereignty, the likelihood is that the system will fail. And I can assure you that if it’s not designed in at the beginning, you’re going to test this under a lot of pressure; you’re likely to be in a crisis as well. Because when you don’t know whether your health models are trained on somebody else’s data, or you’re using very commercially available large language models, then you’re actually beholden to those people, and therefore you may not be able to achieve what you want to achieve as an objective. So it’s a really, really important dimension of a successful deployment.

And all of the stuff that I’m going to go through here, whilst I’ve seen it through failures, is also the recipe for success. So you can think of it both ways. So if I could have the next slide, please. So green AI. I mean, this is not dissimilar to what we had before in terms of cloud and green computing: unless you actually look at the environment, and look at the economic viability of the system, ultimately what it means is that it’s going to cost a lot more and it won’t scale. And if it doesn’t scale and you cannot handle the data volumes and the amount of usage, the likelihood is that it would stop.

Now, in my mind, the approach to take here is to make sure you address both. And addressing both the environmental effects and the cost actually works very, very nicely: we had a similar scenario before in how we deployed cloud services, and the same thing translates to this now. So the more economic your system is, the fewer greenhouse gases it’s likely to emit as well, and as a result you can sustain the system longer term. I mean, we all know people are building massive data centers now. Yesterday, there was, I think, a discussion around Microsoft building a new data center that consumes as much electricity as all of Los Angeles, and Los Angeles is an enormous city.

So the environmental effects of what we’re doing are really key, and they have a direct link into the costs that are driven out of that as well. And I can again assure you that if an AI system can’t scale sustainably, then it won’t scale at all. I’m pretty convinced of that. So we can move on. So the next one is responsible AI, and I think a lot of people here are familiar with that. In terms of governance, assurance, are we doing the right things ethically, is there bias in the system: all of those things fall under the responsible AI banner. And it’s really fundamental in terms of giving people that trust that Theresa was talking about, in order to use the system in anger and really link their lifestyle to it, and so on and so forth.

And as you know, there are all sorts of other systems now, like the AI companions that help you achieve different things, whether it’s weight loss or even providing you counseling and helping you along in your life. But unless they’re done in an ethical way and an unbiased way, and they’re not leading you down a particular path, they’re likely to fail as well. Now, one thing that I wanted to bring to attention, and yesterday Prime Minister Modi was talking about this, which is really key as far as I’m concerned in the responsible AI area, is the human-centered design of AI. Because when you’re actually building an AI system, you need to have in your mind who you’re trying to help and how.

And what does this actually mean? If you’re trying to do something, you have to have a clear vision of what you’re trying to deliver to them when they start to use the system. So I think the example around traffic management was a very good one, because we all struggled with the traffic over the last few days. If a system is put into place which does not take into account the purpose of what it’s doing, then it is likely to fail. I think the goal of the system itself is really key in terms of whether it gets the right sort of results or not.

There are many systems where people don’t consider that, and as a result it becomes unusable by the people, or it might have harms built into it. But the last dimension is how valuable the AI is, and what it means in terms of the outcomes and what the measures are and so on. So a couple of days ago I attended a session where we had a senior executive from the UAE. They were talking about what they’re trying to do as a country. And it’s really key for us to understand what we’re trying to achieve. So they had a very simple kind of thinking in terms of what they were trying to do, which made it much more measurable.

So what was the intention for them? There are about 12 million people in the United Arab Emirates, and with the introduction of AI they wanted those 12 million people to effectively do as much work as 120 million, almost ten times the size. And I think that is really, really key: a very simple reason as to why you’re doing what you’re doing, how you measure it, and what the value is. Now, if you think about that in the context of, say, India, in my opinion that ambition doesn’t give India the same value. Creating lots of agents to replace people’s jobs, or do more jobs, doesn’t actually have the right outcome, because there are already a lot of people here.

Why would you do that, right? So you have to think really carefully about the value of the system itself, because without thinking about that, you end up building a system whose value you cannot measure. And then ultimately it just becomes a dead weight: why do we have this at all? Should we be getting rid of it or not? So hopefully you now understand all of the aspects of the different areas. At Kainos, we deploy AI systems into production, so we see a lot of these issues. And we are quite lucky, because our customers, which are all government departments, are actually very, very clued up in terms of the different aspects of what we’re doing, and they see value in it.

So it’s not just about deploying the technology, but how this technology is going to affect UK citizens, and, where we work in other countries like Canada and the US, those countries’ citizens respectively. So I think that was my last slide. I’ll hand it over to you.

Theresa Yurkewich Hoffmann

So we had originally intended to do different breakout groups. The audience is quite small, so it’s up to you. We could either have everyone break into a few discussions and talk about what you think is the most challenging, or we could use 10 minutes for a Q&A, if people want to share their thoughts. Put your hands up if you want to go into a breakout group and discuss one of the concepts together. Okay, nobody voted for that, so we’ll do the second option. So why don’t we have a discussion. It would be interesting to hear: as you look at these four challenges, which do you think is the most difficult, and which do you feel like you’ve solved? We can have a little discussion around that. Please introduce yourself.

Audience

Hi there, thank you. My name is Ami Kotecha. I’m co-founder of Amro Partners; we are a real estate company, and we are now getting involved in a data spin-out. My challenge is as follows. As one of the co-founders of the company and a leader, I’m very keen, of course, that there’s AI adoption, upskilling, etc. in the company, and of course the productivity challenges we have should be addressed using this technology. I feel like I am often left in the lurch to literally make all the decisions within the private sector environment, whereas I think government needs to step in and make some of these decisions on our behalf, in terms of model utilization, where we go, and what we do with it.

I mean, we are good experimenters, so fortunately we are throwing capital at experimenting. Not every company can afford to do that, or would want to, because of the same sort of issues you mentioned right at the start, which are aligned with the fear of adopting something that is going to break your system or open you up to some kind of cyber attack, etc. So how do you see this playing out in the next 6 to 12 months, because obviously the technology is moving really fast? What role is the government going to play in saying this is safe to use, and this is still experimental and you should worry about it?

Theresa Yurkewich Hoffmann

It’s like, go ahead and do it. But then there’s medium risk, and high risk would be something like really critical infrastructure, or something that’s impacting people directly. And if it’s high risk, then there’s a load of different things you need to do around transparency with people. There are also prohibited use cases of AI. So I think that’s one example where some governments are actually saying: this is what we’ve deemed safe, and if it’s not one of these uses, then we want to see a lot of other checks. In the UK, we have regulation looking at third-party suppliers right now, and whether or not they’re critical to the infrastructure of the country; there will be new requirements on AI as well, in terms of the updates that go in, transparency around models, and explainability.

But then maybe you have the US approach, where you don’t have regulation yet. So I think that’s one example; it really depends on the country. A lot of what we heard yesterday was around, you know, for India, thinking about ethical and responsible AI, but I don’t know if you have any regulation in place around that yet. I think it’s very difficult otherwise for a private company, because otherwise you’re in a race to the bottom: who’s the cheapest, who’s the quickest. And this week I was touring around with different businesses, and everyone was thinking, how do we do agents? But no one was thinking about human-centered, ethical, responsible. So I think it does need to come from the government to have a base.

But I noticed that some are maybe more forthcoming with that than others.

Audience

Just before that, I wanted to answer your question about the government. There is a data protection and personal data law that was, you know, legislated last year, and it is coming into effect progressively; companies are getting a window of around 18 to 24 months. After that, what you are saying, the rules addressing how the data is handled, by the person who creates the data, who is the data principal, and by the one who is the repository, all those rules are coming. But presently, I would say only 0.1% of that responsible AI part is happening. Over the period of these two years, though, the preparation is going to happen, where it will slowly get into that mode, actually.

Omeed Hashim

I was just going to say, she’s a high-flyer entrepreneur in the UK, actually. But in my mind, there are a couple of things that we should really push the government to do. One is about smart data. They’ve been playing around with this for years and years, so we’ve got quite a lot of open banking applications now, but this can be extended way beyond open banking, so that different organizations can share data. Like, for instance, in the property market: how do you go through the cycle all the way from putting an offer in, to conveyancing, to, I don’t know, valuation, to the end?

So that’s really critical. The other side of it is actually having trust in language models which are built within the U.K. itself. And even Serbia is doing that, right? The French have already done it with Mistral. So there are a lot of examples of this, and that’s where the government can really help, and that’s what we should be lobbying them to do, in my opinion. Any other comments? Oh, yeah. Maybe behind you? Oh, sorry, you had your hand up first. You go first, and then behind you next.

Audience

Yeah. So I am building an agentic AI for vending machines. I have been an entrepreneur in the corporate world, but until three years ago I was just doing physical stuff: products, innovation, the food and beverage sector. One of the challenges I am seeing is how to build value at a platform level rather than at an individual customer level. For example, if I offer this vending machine agentic AI to PepsiCo, they would say, don’t do it for Coca-Cola; give it to us only and keep it with us. But UPI, for example, was not a Mastercard or a Visa thing, right? It was for the whole country.

So how do you get that kind of traction to build a platform, instead of one very customized for a customer who might say, don’t give it to anybody else? That is the key question that I am trying to address, and I do not seem to find answers.

Theresa Yurkewich Hoffmann

I agree, I think that is a challenge in the corporate world. I used to work at Microsoft, and even there it was: if you’re using our technology, if we’re coming on a panel, then we’re on a panel, but we’re not having Amazon or Google on the panel with us. But I think, like you say, it’s really about figuring out what you have that’s so unique, and that actually goes to the value lens: if you have something that’s really valuable to people, you can make the case that it has to be shared. But it is difficult if you’re building it with one customer first, because that almost becomes their IP that they want to keep. So something that we are doing when we’re working on responsible AI projects is we’re looking at the similarity of requests that come in, and we’re sort of doing the work ourselves in the background, and then we’re taking the elements that we need and exposing them to the different customers. That way we keep that IP. But it is very difficult to get multiple customers on board if they’re all competing.

Audience

Yeah. So, for example, I built a few IPs in the area of sustainability, like clean air and clean water. I sold one to a company, but that company is not commercializing it. I don’t want to name the company, but it didn’t want to commercialize the technology; it wanted to keep it. So that’s a big challenge that I am seeing in the corporate world: a company will buy another company, but it won’t implement the technology for society or for good. That is the challenge I am seeing. How do you handle that? Because that is part of responsible AI as well as the valuable AI part.

Omeed Hashim

Yeah, I think you’re right, and I think you have your own kind of description of this problem. But I was in the US a few months ago, and I saw, I don’t know whether you’re familiar with SVB, it’s basically Silicon Valley Bank. They did a presentation to us where they were talking about where all the funds are going. And if you actually see what is going on, I think it’s about a trillion dollars’ worth of investment, and this investment is flowing into only a handful of companies. What those companies are doing is literally stifling everybody else. This is a commercial reality. But if I was to offer you some options, I would say it shouldn’t be just the IP.

You should be thinking about it more as a service that you could build layers on. So you may retain the IP, or you may share the IP; it could be co-created, whatever it is, but it’s got to have a service model attached to it. Because if PepsiCo buys X and then co-creates, and Coca-Cola buys Y, why would they be buying it, and how would you be able to build on top of that? But you know, it’s a very, very commercially challenging problem. It’s been there for many years; this is nothing new.

Audience

As Shri said, exactly like that, UPI beat that. So today, compared to a Mastercard or a Visa, everyone in India is using UPI, and there are applications attached to UPI, whether it’s Paytm or Google Pay or Amazon Pay; all of them are on the platform of UPI. So the question I had was: why are IT companies, for example Kainos, or an Infosys or an Accenture, not looking at the platform approach instead of the services approach, where they put their team manpower in and run projects? I see this as a challenge. I have been talking to the top management of Infosys and Accenture, and every time I go with a proposal, they say, just do it for a client, and we will attach you as an expert. I don’t want to do that; I want to build a platform. There is nobody who is really interested in building that sort of business, which is path-breaking and takes a longer time. Like UPI, it happened organically. Can these kinds of initiatives happen inorganically? That was the question.

Theresa Yurkewich Hoffmann

I think they are looking at both, yeah. So I think we should take one more question, because we have very few minutes left; we can talk after. I want to get to the person behind you for his question as well, and then we will do a quick wrap-up.

Audience

Good afternoon. Thanks for covering those areas in the lectures; that was much needed to understand. So you talked about sovereign AI, and then you talked about value and responsible AI. There might be a few scenarios where, while chasing sovereignty, we might have to bypass value additions or responsibility for the citizens, and the other way round also. So can you discuss those scenarios where you value sovereignty more than responsible AI or value addition, and the reverse as well, and when they can be taken into account in parallel?

Theresa Yurkewich Hoffmann

So you’re asking about responsible AI and valuable AI, where they link, and where one might be more useful than the other. The way I see responsible AI, I think it can actually serve as a lens for everything, but it’s much easier to think of them as separate. I think responsible AI can encompass five things: ethics, trust, bias and fairness, human-centered design, and governance and security. Where I think it’s distinguished from value is that value looks beyond financial growth. A lot of organizations you might work with, or many organizations in the world, are looking at how much money this will save them, or how much time or productivity. But I think valuable AI is looking at what goes beyond that: does it actually create more well-being in people?

Does it give people time back with their families, for example, or other hobbies they want to do? Valuable is thinking about the long-term benefit this will have in terms of how we change society. Maybe it’s going to create a whole bunch of different jobs in something else. So I think actually if you’re using responsible AI, it will create value. So I still think they go hand in hand, but that’s probably how I distinguish them. Is that your question? No? I’m not sure. Maybe Omeed has an answer. You know also. Yeah.

Omeed Hashim

So I think you’re asking what happens when you have to make a trade-off between sovereignty and value. And I think this is a very good question, to be honest. Because, again, yesterday I was wandering around the summit; I keep asking people questions about different things. And one of the countries that I spoke to, they know that using GPT models or Claude and various other things is a quick route to building what they need to build, because they’re there, it’s immediate, and it can be done almost without any issues at all. But they’re taking the hard route. They’re saying: actually, we don’t want to do that, because what if tomorrow we fall out with them, as Europeans are falling out with Americans anyway? What happens if they turn off the systems? What would we do then? So if you think about it in terms of speed, the value is in going with what you’ve got. But the more challenging question of value is: can we actually use this system for our citizens on an ongoing basis? Is that data something that belongs to us? Are the models aligned with what we are doing?

So they want to be able to enable their people to deliver the right outcomes, and that would not happen if they just outsourced their sovereignty to the U.S. So I think those are some of the very, very important factors that need to be considered. But ultimately, from a value perspective, Theresa is spot on. It’s about what the value is to the people who are going to use that system. So let me give you an example: we’ve stopped multiple times in the traffic because some VIP was coming out of somewhere, and they just literally closed the road.

So we’re sitting there for half an hour, and then we get going again. That’s happened, I think, three or four times so far. So if you were to build a system, you would need to think: what is the value for those taxi drivers, for all this public going around? That’s the key thing that you need to be able to use AI to achieve. It needs to be measurable; it needs to actually help the people. So yes, it’s a very tricky trade-off.

Theresa Yurkewich Hoffmann

I think the trade-off question is really good, especially on sustainability as well. A lot of times organizations might just think: how do we adopt AI as quickly as possible and get people to use it as much as possible? But actually, every query that you run has a sustainability impact. And so I think there’s a trade-off there, because there might be a sustainability impact, but depending on where you are, you might value training people to use AI more, so you might be okay with that impact because it’s more about getting people comfortable with using it. But if you are an organization that really values sustainability, with really strong carbon goals or net zero goals,

then actually that might be the trade-off that you have. So one thing that we’re doing when we’re working with organizations is we’re getting them to make that very difficult decision of: here’s high concern, here’s low concern. We map out all the harms that we can think of, and all the principles and values that align to them, and they can’t leave any two side by side; they have to rank all of them from high to low concern. Very quickly, that makes you see what’s real for your organization. I’ve seen a lot of them put sustainability at the bottom, which to me is a little bit concerning, but it does start to make you really understand your organization and how those trade-offs are going to play out.

And that’s what we’re finding in the human one as well.
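The point above, that every query carries a sustainability cost that compounds with scale, can be made concrete with a back-of-the-envelope sketch. The constants here (energy per query, cooling water per kWh) are purely illustrative assumptions for discussion, not measured figures from the session or any provider.

```python
# Rough sketch of the cumulative footprint of AI usage.
# All constants are illustrative assumptions, not measurements.

ENERGY_PER_QUERY_WH = 3.0   # assumed watt-hours per LLM query
WATER_PER_KWH_L = 1.8       # assumed litres of cooling water per kWh

def footprint(queries_per_day: int, days: int) -> dict:
    """Return total energy (kWh) and cooling water (litres) for a usage level."""
    energy_kwh = queries_per_day * days * ENERGY_PER_QUERY_WH / 1000
    water_l = energy_kwh * WATER_PER_KWH_L
    return {"energy_kwh": round(energy_kwh, 1), "water_litres": round(water_l, 1)}

# e.g. an organisation making 20,000 queries a day, for a year
print(footprint(queries_per_day=20_000, days=365))
```

Even under these toy numbers, a year of routine usage adds up to tens of thousands of kWh and litres, which is why the speakers argue the trade-off must be weighed up front rather than discovered after rollout.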

Audience

So just 10 seconds more, adding to yours, ranking low to high. Out of all these four, sustainability, sovereignty, responsibility, and value, how do you rate them from low to high, all the four factors you have covered in your paper?

Theresa Yurkewich Hoffmann

How do I rate them? I think that’s very difficult. I think I’m putting responsible AI at the top, which is a bit of a cheat, because it can actually kind of include sustainability, and I think it will create value. So then I would probably put sovereignty lower than that, but obviously this year has maybe changed that geopolitically. I think I still put responsible AI at the top; I’ll make that hard choice. What do you say?

Omeed Hashim

I think I kind of agree, and Prime Minister Modi said this himself yesterday: human-centered AI design is part of responsible AI. A few days ago, Theresa and I were talking to someone, and they were describing a system. Now, if you just indulge me for a couple of minutes, let me explain the background of the system, and then you’ll see how it’s relevant. They were building a system for a nursing or old people’s home. You may know that the elderly get dehydrated, and they forget to drink water, and that causes a lot of problems for them. So they built a system where, using AI and vision, they were seeing whether the elderly were having enough liquid in the day or not. Now, that’s fantastic; everybody says this is a brilliant idea. But then you think about it: they are monitoring those elderly both in the common areas and where they may be in their bedrooms. So that brings a challenge. And then the other challenge was: what about the people, the nurses, who are actually hydrating them? Because it could become a negative effect on them; somebody might be saying, you’re not doing your job right. And what about the family of the elderly, what about the impact on them? So I think it is really important to understand why we build a system, who it affects, how it affects them, and what the long-term benefits are, which brings the value. This is why it’s four dimensions; none of them is independent. I think they all relate to one another in one shape or form.

Theresa Yurkewich Hoffmann

Yeah, so we’ll work towards wrapping up, because I think we’re getting the time check. This timer switched: it said 8, then it said 17, now it says 10, and then she told me I had 7. This one’s right, okay. Well, let’s see if there are more questions. Yeah, but we have the takeaways and things to go through also, so I think we’ll wrap up, and we can talk to people individually afterwards. Can we skip through some of the slides? Next one, next one, next one. Next one, I think. Okay, so we actually wanted to flip that question and ask it to you in the audience as well. Which one would be your top?

So of those four lenses, sovereign, green, responsible, and valuable AI, which one do you think is an absolute must-have? And you can only pick one: if this isn’t there, it’s going to derail the project. Shall we do a show of hands? Who says sovereignty is the most important? Who says it’s green AI, sustainability? Who says it’s responsible, and then value? Some people didn’t vote; you didn’t vote back there. But it sounds like a lot of people are in the camp that responsible and value are the most important. I think I agree. But what we wanted to get across is that all of these need to come into play as well. Can we do the next slide?

On that question, though, who has a responsible AI practice in place? Who uses a framework or anything like that? Anybody? Who has a sovereign AI policy in place? No? And who is looking at sustainability? None of us. So that's a takeaway for all of us. We wanted to wrap up with: how do I take this forward? A couple of points I want to make. The first is that we have taken many of the learnings discussed here and turned them into a white paper. There's a link below, but we can share it with you; we've shared it on LinkedIn as well. It wraps up, for each of those themes, eight to ten things you could do if you really wanted to take sovereign, green, responsible, and valuable AI forward.

So please check it out; I'm very happy to talk about the paper and give you more insight. The key takeaway for us here is that no single dimension is the answer. That has come out in the scenarios, in the conversations we've had, and in how we prioritized: you can't really have just one. You need all of them if you want to scale a project and really make it to production. The second point is on trade-offs. It was really good that that came up in the conversation: be aware of the trade-offs you will have to make, and have a process in place to record why you made each decision.

A takeaway for everyone here: think about an AI policy that sets out how you will use AI and what you will prioritize. Think about a responsible AI framework, which is essentially the set of questions and requirements you want implemented across ethics, trust, and security. And then really think about how you can turn some of this into numbers. What are the KPIs you can actually track for sustainability, for users, for ethics? Don't leave them as a vague commitment that "we will be ethical"; think about what that actually means for you and how you are going to measure it. That is important if you want to get funding and investment and show that the project is a success.

And then finally, think about how you can upskill your teams to understand these concepts and how you can incorporate diverse views; that is probably the most important part of building out the responsibility. So we will wrap up. If you want to get in touch with us, here are our details: find us on LinkedIn or send us an email. We can take a couple of minutes after this, since I know there were two questions in the audience that we might not have got to. Otherwise, we hope this session was useful. If you want to give us feedback, here is a bigger QR code: if you want to stay in touch, fill it out, and let us know if there is anything we can improve about the session or any questions you have. We're super happy to hear that. Otherwise, just a big thank you for your participation, and we hope you have a good rest of the day and a good weekend.

Omeed Hashim

Yeah, I was just going to say: great question there about trade-offs, and absolutely the right question to ask, because none of these is unique. Sorry, please, go ahead.

Audience

Yeah. You were talking about trade-offs, and I just wanted to say, okay, every model has its own aspects, pros and cons, or, as you say, different dimensions. I've got most of my answers from those questions, but I just wanted to ask: if we're building something and taking some of these aspects, say responsible AI and valuable AI, then we'll be missing some other aspects. As he said about responsibility, if we are taking accuracy and fairness... if it makes it easier, I could speak in Hindi, I understand, it's okay, but fairness, okay, if we are doing... Sorry, sorry. Again.

Theresa Yurkewich Hoffmann

There's no issue; you can ask us by email as well. We're more than happy to respond to you if you want to ask.

Omeed Hashim

Just a question.

Related Resources: knowledge base sources related to the discussion topics (21)
Factual Notes: claims verified against the Diplo knowledge base (7)
Correction (high)

“Only about 30 % of AI pilots progress to production.”

The knowledge base reports that almost 80 % of AI pilots do not make it to production, implying roughly 20 % succeed, not 30 % [S6].

Confirmed (high)

“A lack of trust in the technology, its data, and its societal impact is the principal barrier.”

The source highlights mistrust stemming from concerns over data and model control as a key obstacle to AI project adoption [S1].

Confirmed (medium)

“Sovereignty hinges on who controls data and models; reliance on foreign AI services creates vulnerability.”

The knowledge base outlines AI sovereignty dimensions that include control over data, models, training, and operational governance, matching the claim [S22].

Additional Context (medium)

“Public‑health X‑ray triage model required far more compute, power and cooling water than the host region could provide, making the solution financially and politically untenable.”

Sources discuss the cooling and water challenges of high-compute AI workloads and the impact of electricity and water shortages on data-centre feasibility in various regions [S89] and [S92].

Additional Context (low)

“Traffic‑light optimisation reduced average commute times but rerouted traffic through low‑income neighbourhoods, worsening pedestrian safety and provoking community backlash.”

A discussion of a transport-focused AI scenario notes equity and community impact concerns when AI-driven traffic management is deployed [S75].

Confirmed (low)

“Voice‑cloning scams illustrate AI‑generated harms (e.g., Romanian voice‑cloning scams).”

The knowledge base documents the use of deep-fake and voice-cloning technology in scams, confirming that such AI-generated fraud exists [S84] and [S85].

Additional Context (medium)

“Unsustainable AI systems incur higher operating costs and are unlikely to scale, linking environmental impact directly to economic viability.”

Sources note that high energy and cooling demands raise operating expenses and affect scalability of AI deployments, especially in regions with limited power and water resources [S89] and [S92].

External Sources (95)
S1
Building Sovereign and Responsible AI Beyond Proof of Concepts — Theresa Yurkewich Hoffmann, Omeed Hashim
S2
WS #280 The DNS Trust Horizon: Safeguarding Digital Identity — Audience: individual from Senegal named Yuv (role/title not specified)
S3
Building the Workforce: AI for Viksit Bharat 2047 — Audience: Professor Charu, Indian Institute of Public Administration (one identified audience member), …
S4
NRI Collaborative Session: Navigating Global Cyber Threats via Local Practices — Audience: Dr. Nazar (specific role/title not clearly mentioned)
S5
Building Sovereign and Responsible AI Beyond Proof of Concepts — Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
S6
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S7
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the AI Security Council, a significant discussion unfolded regarding the role of artificial i…
S8
Certifying humanity: Labeling content amid AI flood — The erosion of trust did not begin when AI became highly intelligent. It began when synthetic content became abundant. Tex…
S9
Deepfakes and the AI scam wave eroding trust — Author: Slobodan Kovrlija. Deepfakes force an uncomfortable reassessment of how trust works online. For decades, digital t…
S10
Who Watches the Watchers Building Trust in AI Governance — So there is no end to the story of how regulators should design the regulations. That is the main question. All countrie…
S11
Technology Regulation and AI Governance Panel Discussion — Different countries require different approaches based on their regulatory context and capture by interest groups
S12
US regulators to decide the path for AI regulation — Prompted by the rise of generative artificial intelligence systems (AI) such as OpenAI’s ChatGPT, US lawmakers are curre…
S13
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — I think all of the above to some extent. Part of why we start with principles in our governance program is I think it’s …
S14
Health Inequality Monitoring — The process of inequality monitoring does not stop with the reporting of data, but must continue on to its translation f…
S15
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S16
Overview of AI policy in 10 jurisdictions — Summary: Brazil is working on its first AI regulation, with Bill No. 2338/2023 under review as of December 2024. Inspire…
S17
HealthAI: The Global Agency for Responsible AI in Health — Responsible AI is characterised by AI technologies that align with established standards and ethical principles, priorit…
S18
Successes & challenges: cyber capacity building coordination | IGF 2023 — A sustainability outlook is crucial for lasting and effective impact in cyber capacity building. Projects lacking sustai…
S19
Democratizing AI: Open foundations and shared resources for global impact — The speakers consistently emphasised the need for broader engagement and participation. They highlighted the importance …
S20
DC3 Community Networks: Digital Sovereignty and Sustainability | IGF 2023 — By involving various stakeholders, including community members, organisations, and government bodies, this model ensures…
S21
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Hisham Ibrahim: I’ll also mention three quick ones, looking across my service region, trying to give different examples….
S22
Discussion Report: Sovereign AI in Defence and National Security — The presentation outlines six key dimensions of AI sovereignty: data control, model control, training and alignment over…
S23
WS #102 Harmonising approaches for data free flow with trust — Dave Pendle: Yeah, thanks, Saman. Thanks for having me and good morning to everyone. My name is Dave Pendle. I’m an …
S24
Scaling Trusted AI: How France and India Are Building Industrial & Innovation Bridges — 100 % trust only in machines is still a little far. So people in the loop is definitely what built trust for all of us…
S25
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Natalie Cohen, Head of Regulatory Policy for Global Challenges at the OECD, positioned sandboxes within broader regulato…
S26
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Diana Nyakundi:Yeah, thanks Fadi. So with regards to opportunities, there are a lot of AI pilot projects that are coming…
S27
Keynote – Martin Schroeter — “while more than two-thirds of global organizations are already heavily invested in AI, almost half still struggle to s…
S28
Scenarios and their Implications — In the first section, we explain why scenarios are a useful tool to address the uncertainties around the future of work …
S29
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S30
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Ignoring the wider context and blindly implementing digital solutions can inadvertently increase the digital divide. It …
S31
Research Publication No. 2014-6 March 17, 2014 — Among the bigger picture insights gained from our review is the high degree to which the economic, political, organizati…
S32
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Lidia Stepinska Ustasiak: Excellencies, distinguished delegates, ladies and gentlemen, good afternoon. My name is Lidia …
S33
Exploring the power of AI: Diplomatic language as Turing Test — Trade-offs form the bedrock of any diplomatic treaty. They embody a delicate balance between give-and-take, a nuanced ta…
S34
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S35
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 127. The Inspector found that many organizations take a narrow approach to learning and talent management – one that is …
S36
AN INTRODUCTION TO — (mainly former socialist countries) where it became obvious that the development of society is a much more complex proce…
S37
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S38
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S39
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Our country actively contributes to European initiatives strengthening Europe’s technological leadership. In this contex…
S40
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S41
From principles to practice: Governing advanced AI in action — – Balancing rapid technological advancement with necessary governance frameworks across different regional approaches B…
S42
Comprehensive Report: European Approaches to AI Regulation and Governance — This discussion revealed gaps in current regulatory approaches, which focus primarily on technical performance and funda…
S43
AI That Empowers Safety Growth and Social Inclusion in Action — This discussion revealed both significant progress and substantial challenges in implementing responsible AI governance….
S44
AI & Diplomacy: Managing New Frontiers – ADF 2024 — The discussion concluded that although regulatory frameworks recognise the importance of these issues, the gap between i…
S45
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggest…
S46
Responsible AI in India: Leadership, Ethics & Global Impact — “There are challenges, and I’d be remiss if I didn’t spend 30 seconds on the challenges that standards adoption and AI t…
S47
Safeguarding Children with Responsible AI — High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests…
S48
Technology Regulation and AI Governance Panel Discussion — Different countries require different approaches based on their regulatory context and capture by interest groups
S49
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Regulatory frameworks must exist at different levels – global, regional, national, and even sub-national – to ensure com…
S50
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S51
Building Sovereign and Responsible AI Beyond Proof of Concepts — Green AI addresses both environmental impact and economic viability. The speakers argued that these concerns are intrinsi…
S52
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S53
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant argues that AI solutions are sustained and scalable when they actually address real problems and help so…
S54
Artificial intelligence — Sustainable development
S55
Living with the genie: Responsible use of genAI in content creation — Connecting these discourses is the realization that technology intertwines deeply with societal goals, such as promoting…
S56
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — It’s notable that government representatives openly acknowledge significant gaps and failures in current AI governance, …
S57
WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI) — 25. The number of countries with expertise and capacity in AI is limited. At the same time, the technology of AI is adva…
S58
Global AI Policy Framework: International Cooperation and Historical Perspectives — So I think that today’s problem, as well as the IP policies, that how to facilitate those creation based on the IP mater…
S59
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Demands on policy exist without the building blocks to support its implementation Lack of infrastructure, skills, compu…
S60
Building a Digital Society, from Vision to Implementation — – Chukwuemeka Cameron Economic | Sociocultural Hines cites research from Gary Marcus presented at Web Summit showing t…
S61
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Natalie Cohen: Yeah, I think this issue of trust is key. One thing the OECD does is a driver of trust in government surv…
S62
AI agents offer major value but trust and data gaps remain — AI agents could drive up to $450 billion in economic value by 2028, according to new research by Capgemini. The gains wou…
S63
AI as critical infrastructure for continuity in public services — first definitely not technology because I think we’ve seen technology is always almost ahead very true over the last cou…
S64
Building Sovereign and Responsible AI Beyond Proof of Concepts — Artificial intelligence | Building confidence and security in the use of ICTs Theresa points out that only a small frac…
S65
Keynote – Martin Schroeter — “while more than two-thirds of global organizations are already heavily invested in AI, almost half still struggle to s…
S66
https://dig.watch/event/india-ai-impact-summit-2026/keynote-martin-schroeter — or never makes it out of the experimentation phase. And what we’re seeing is not an innovation problem. The innovation i…
S67
Blockchain-Based Public Procurement to Reduce Corruption — The project is anchored in a software PoC to uncover, using a bottom-up approach, key capabilities and limitations assoc…
S68
AI can reshape the insurance industry, but carries real-world risks — AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection. Acco…
S69
Micro and macro philosophy — My hunch is that we may consider revisiting or even ‘retiring’ the concept of ‘freedom’ (even scientists are considering…
S70
WORKING PAPER — The current global landscape is marked by an array of disparate data regulations, a situation that presents substantial imp…
S71
WS #254 The Human Rights Impact of Underrepresented Languages in AI — Nidhi Singh: Yeah, thank you so much for the question. So I think this is something we’ve broadly said this in the in…
S72
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — But you started to look upon through different lenses. All that I need to do is to look through different lens. But I st…
S74
https://dig.watch/event/india-ai-impact-summit-2026/ensuring-safe-ai_-monitoring-agents-to-bridge-the-global-assurance-gap — And so I think there will be a lot of questions around how do you weigh up all these challenges, again, knowing that eve…
S75
https://dig.watch/event/india-ai-impact-summit-2026/building-sovereign-and-responsible-ai-beyond-proof-of-concepts — then actually that might be the trade -off that you have. So I think one thing that we’re doing when we’re working with …
S76
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S77
UNESCO Recommendation on the ethics of artificial intelligence — 118. Member States should work with private sector companies, civil society organizations and other  stakeholders, inclu…
S78
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 137. UNHCR has established a centralized systematic learning centre overseeing all learning solutions across th…
S79
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Skills Development: African countries can develop policies that promote skills development in areas related to AI. This …
S80
Press Conference: Closing the AI Access Gap — Trust, accessibility, inclusivity, and collaboration are seen as crucial pillars for successfully harnessing AI’s potent…
S81
#205 L&A Launch of the Global CyberPeace Index — Wisniak argues that AI governance discussions often focus too much on hypothetical future risks while ignoring current h…
S82
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — Cormann outlined the OECD’s comprehensive approach to supporting policymakers through four key areas. First, the organis…
S83
Shadow AI and poor governance fuel growing cyber risks, IBM warns — Many organisations racing to adopt AI are failing to implement adequate security and governance controls, according to IB…
S84
Disinformation and Misinformation in Online Content and its Impact on Digital Trust — Tara Harris provided concrete examples of how bad actors exploit these technologies, describing how Prosus has been targ…
S85
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various wa…
S86
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S87
AI for Bharat’s Health: Addressing a Billion Clinical Realities — This comment identifies a critical gap between proof-of-concept success and real-world adoption. It’s insightful because…
S88
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration R…
S89
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The cooling challenge becomes complex as compute requirements scale, with different cooling solutions needed for varying…
S90
DPI+H – health for all through digital public infrastructure — A global recognition of DPI’s foundational value in healthcare is apparent, though this acknowledgment is coupled with a…
S91
ACKNOWLEDGEMENTS — Data centres are key to today’s cloud services. To optimize performance, they need to be located where access to high-ca…
S92
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — South Africa experiences electricity challenges and water shortages, requiring expensive backup power and affecting cool…
S93
AI for Democracy: Reimagining Governance in the Age of Intelligence — Those who design, train, and deploy these systems will influence not only individual users, but also the informatio…
S94
Annex 5 — – ■ Data integrity risks may occur when people choose to rely solely upon paper printouts or PDF reports from computeriz…
S95
Surveillance technology: Different levels of accountability | IGF 2023 Networking Session #186 — Concerns have been raised regarding the misuse of surveillance technology in the Middle East and North Africa (MENA) reg…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Theresa Yurkewich Hoffmann
9 arguments · 170 words per minute · 4540 words · 1599 seconds
Argument 1
Lack of trust is a primary reason only ~30 % of AI pilots reach production, with organizations failing to consider trust dimensions such as reliability, data handling, impact on jobs, and societal effects.
EXPLANATION
Theresa explains that the low conversion rate of AI pilots to production is largely due to insufficient trust. Organizations often overlook how reliable the AI will be, how data is managed, and the broader impacts on employment and society.
EVIDENCE
She cites that only 30 % of AI projects move to production and links this to a lack of trust, noting that trust encompasses organizational confidence, data sharing, output reliability, societal impact, and job implications [11-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The low conversion rate of AI pilots and its link to trust issues is documented in the sovereign and responsible AI briefing and in a data-silo report showing that ~80 % of pilots fail to reach production [S1][S6].
MAJOR DISCUSSION POINT
Trust as a barrier to AI adoption
Argument 2
The rise in AI incidents worldwide (e.g., voice‑cloning scams, AI‑generated books without attribution, biased facial‑recognition at borders) erodes public confidence and hampers adoption.
EXPLANATION
Theresa highlights a growing number of AI‑related harms that undermine public trust. Specific incidents illustrate how misuse can lead to scams, misinformation, and discrimination, discouraging broader AI deployment.
EVIDENCE
She references the OECD AI Observatory’s monitor showing 600 incidents in December 2025, and provides examples such as voice-cloning scams in Romania, AI-generated books in Cairo lacking human oversight, and biased facial-recognition at borders [20-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Erosion of public trust due to synthetic content, deepfakes and AI-driven scams is highlighted in sources on content labeling and deepfake scams [S8][S9].
MAJOR DISCUSSION POINT
AI harms reducing public trust
Argument 3
Introduces four lenses—Sovereignty, Green (sustainability), Responsible AI, and Valuable AI—as a holistic approach to building trust and preventing harms.
EXPLANATION
Theresa presents the 4D framework, arguing that evaluating AI projects through these four dimensions helps anticipate and mitigate risks, ensuring trustworthy and valuable outcomes.
EVIDENCE
She describes the four dimensions, sovereignty (control over data and models), green (environmental sustainability), responsible AI (ethics, bias, governance), and valuable AI (real-world benefit), as essential for scaling AI safely [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 4-dimensional framework (Sovereignty, Green, Responsible, Valuable) is described in the sovereign and responsible AI presentation [S1].
MAJOR DISCUSSION POINT
4D framework for trustworthy AI
Argument 4
Six common failure categories: (1) adoption/impact gap, (2) governance failures, (3) misalignment with societal goals, (4) sovereignty issues, (5) sustainability pressures, and (6) change‑management challenges.
EXPLANATION
Theresa outlines why many proof‑of‑concept AI projects do not succeed, pointing to gaps between design and real‑world use, weak governance, misaligned objectives, lack of control, environmental constraints, and cultural resistance.
EVIDENCE
She lists the six challenges while discussing why proof-of-concepts fail, covering adoption, governance, misalignment, sovereignty, sustainability, and change-management [42-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The six failure categories are enumerated in the same sovereign AI briefing [S1].
MAJOR DISCUSSION POINT
Root causes of POC failures
Argument 5
Governments need to set baseline safety standards; regulatory approaches differ (e.g., UK’s third‑party AI supplier rules vs. the US’s lack of formal regulation).
EXPLANATION
Theresa argues that clear governmental regulations are essential for high‑risk AI applications, noting that the UK is moving toward stricter supplier rules while the US currently lacks comparable legislation.
EVIDENCE
She explains high-risk AI requires transparency, explainability, and third-party supplier regulation in the UK, contrasting this with the US’s more permissive stance [213-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Comparisons of UK third-party supplier rules and the US regulatory gap are discussed in analyses of AI governance and regulator approaches [S10][S11][S12].
MAJOR DISCUSSION POINT
Regulatory landscape for AI
Argument 6
Mapping high‑ and low‑concern harms helps decide which dimension to prioritize, though no single lens can be ignored.
EXPLANATION
Theresa suggests a practical method of ranking harms by concern level to guide which of the four dimensions should receive focus, emphasizing that all dimensions remain important.
EVIDENCE
She describes a process where organisations map harms from high to low concern, revealing which issues (e.g., sustainability) are most critical for them [306-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The method of mapping harms to prioritize dimensions is presented in the 4D framework discussion [S1].
MAJOR DISCUSSION POINT
Prioritisation of trust dimensions
Argument 7
Create an AI policy that defines priorities across the four dimensions; adopt a responsible‑AI framework with concrete questions and safeguards.
EXPLANATION
Theresa recommends organisations develop a formal AI policy that outlines how each of the four lenses will be addressed, and implement a responsible‑AI framework to embed ethical and security safeguards.
EVIDENCE
She mentions a white paper summarising eight-to-ten actions per dimension and urges creation of an AI policy and responsible-AI framework with clear safeguards [342-355].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations to draft an AI policy covering the four lenses and to adopt a responsible-AI framework appear in the sovereign AI briefing and safety-growth discussions [S1][S15].
MAJOR DISCUSSION POINT
Policy and framework recommendations
Argument 8
Develop measurable KPIs for sustainability, ethics, and user impact to demonstrate value and secure funding.
EXPLANATION
Theresa stresses the importance of quantifying AI outcomes through key performance indicators, enabling organisations to prove value, attract investment, and meet sustainability goals.
EVIDENCE
She advises defining KPIs for sustainability, ethics, and user impact, moving beyond vague commitments to measurable targets [355-359].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance to translate goals into measurable sustainability, ethics and user-impact KPIs is provided in the sovereign AI briefing and sustainability outlook literature [S1][S18].
MAJOR DISCUSSION POINT
KPIs for AI governance
Argument 9
Upskill teams, incorporate diverse perspectives, and engage government to promote smart data sharing and domestic model development.
EXPLANATION
Theresa highlights capacity building as essential, urging organisations to train staff, include varied viewpoints, and collaborate with governments to foster local data ecosystems and sovereign AI capabilities.
EVIDENCE
She calls for upskilling, diverse input, and government engagement to support responsible AI and sovereign model development [360-362].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building, stakeholder diversity and government partnership for sovereign AI are emphasized in the sovereign AI discussion and digital sovereignty networks [S1][S20].
MAJOR DISCUSSION POINT
Capacity development and stakeholder engagement
Omeed Hashim
8 arguments · 161 words per minute · 2796 words · 1039 seconds
Argument 1
Sovereignty means control over data and models; loss of this control undermines trust and can cause project failure.
EXPLANATION
Omeed explains that AI sovereignty—having authority over where data resides and how models are built—directly influences trust. Without this control, projects risk failure due to uncertainty about data usage and model updates.
EVIDENCE
He discusses the importance of control over data and models, linking loss of control to reduced trust and potential failure, and cites Serbia’s plan to build its own LLM as an illustration [137-144][145-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of data and model control for trust is detailed in the sovereign AI dimensions and the discussion report on sovereign AI in defence [S1][S22].
MAJOR DISCUSSION POINT
AI sovereignty and trust
Argument 2
Green AI links environmental impact to economic viability; more sustainable systems are also cheaper and more scalable.
EXPLANATION
Omeed argues that environmental sustainability and cost efficiency are intertwined; greener AI solutions reduce carbon footprints and operational expenses, making them more scalable.
EVIDENCE
He describes how sustainable AI reduces greenhouse-gas emissions and costs, referencing cloud computing economics, massive data-center electricity consumption, and the link between sustainability and scalability [154-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Links between AI sustainability, reduced emissions and cost efficiency are discussed in the green dimension and sustainability outlook sources [S1][S18].
MAJOR DISCUSSION POINT
Environmental and economic benefits of Green AI
Argument 3
Responsible AI encompasses ethics, bias mitigation, governance, security, and human‑centered design to ensure trustworthy outcomes.
EXPLANATION
Omeed outlines that responsible AI requires ethical standards, bias checks, robust governance, security measures, and designs that centre human needs, all of which build trust and prevent harm.
EVIDENCE
He details responsible AI components such as governance, ethics, bias, security, and human-centered design, noting their role in fostering trust and safe AI deployment [168-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The components of responsible AI (ethics, bias mitigation, governance, security, human-centered design) are outlined in the responsible AI lens and safety-growth discussions [S1][S15].
MAJOR DISCUSSION POINT
Components of responsible AI
Argument 4
Valuable AI focuses on delivering real‑world benefits, measurable outcomes, and societal well‑being beyond mere cost savings.
EXPLANATION
Omeed stresses that AI should generate tangible societal value, not just financial efficiency. Measuring impact on wellbeing, job creation, and broader societal goals is essential for true value.
EVIDENCE
He provides examples such as the UAE’s ambition to multiply workforce productivity and stresses the need for clear, measurable objectives to avoid dead-weight projects [181-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The valuable AI dimension emphasizing societal outcomes and measurable impact is described in the 4D framework and democratizing AI literature [S1][S19].
MAJOR DISCUSSION POINT
Defining and measuring AI value
Argument 5
National AI sovereignty—building and hosting models domestically (e.g., Serbia’s own LLM)—is crucial for control and long‑term trust.
EXPLANATION
Omeed points out that countries seeking AI sovereignty aim to develop and host models within their borders to retain control over data and avoid dependence on foreign providers, thereby sustaining trust.
EVIDENCE
He recounts a conversation with Serbian officials who plan to build large language models locally to maintain control over AI systems [145-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building domestic large language models for sovereignty is highlighted in the sovereign AI report and examples such as Serbia’s initiative [S22][S1].
MAJOR DISCUSSION POINT
Domestic AI model development
Argument 6
Organizations must balance competing priorities; for example, choosing a fast external model may boost short‑term value but sacrifice sovereignty and long‑term control.
EXPLANATION
Omeed explains that while external AI services can deliver quick value, they introduce risks to sovereignty and future autonomy, forcing organisations to weigh immediate benefits against strategic control.
EVIDENCE
He describes scenarios where countries consider using external models like GPT or Claude for speed, but worry about losing control if providers withdraw services, highlighting the trade-off between value and sovereignty [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between rapid value from external models and loss of sovereignty is discussed in the trade-off analysis of the 4D framework [S1].
MAJOR DISCUSSION POINT
Trade‑offs between value and sovereignty
Argument 7
A service‑oriented, co‑creation model is suggested to retain IP while enabling multi‑client platforms.
EXPLANATION
Omeed proposes shifting from pure IP ownership to a service‑based approach where AI capabilities are offered as shared services, allowing co‑creation and broader adoption across multiple clients.
EVIDENCE
He suggests offering AI as a layered service, retaining or sharing IP, and co-creating with clients to overcome commercial challenges of exclusive IP [273-276].
MAJOR DISCUSSION POINT
Service model for AI commercialization
Argument 8
Encourage governments to support sovereign AI initiatives and establish clear regulatory baselines for high‑risk applications.
EXPLANATION
Omeed calls for governmental action to back sovereign AI projects and to set definitive safety standards for high‑risk AI, ensuring trust and long‑term viability.
EVIDENCE
He reiterates the importance of sovereignty, noting that loss of trust leads to crises, and stresses that governments must provide clear regulatory frameworks for critical AI systems [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for government backing of sovereign AI and clear safety standards for high-risk AI appear in regulator-watching analyses and policy panels [S10][S11][S12][S16][S22].
MAJOR DISCUSSION POINT
Government role in sovereign AI
Audience
5 arguments, 154 words per minute, 1127 words, 437 seconds
Argument 1
Private sector leaders seek clearer governmental guidance on safe AI use and model selection.
EXPLANATION
Ami Kotecha expresses that private companies need government‑defined safety standards and guidance on which AI models are acceptable, especially for high‑risk or critical applications.
EVIDENCE
She describes her company’s need for government direction on AI safety, model utilization, and regulatory expectations for high-risk use cases [210-228].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Private sector demand for clear AI safety standards and model guidance is reflected in regulator-watching analyses and policy panel discussions [S10][S11][S12].
MAJOR DISCUSSION POINT
Demand for government AI guidance
Argument 2
Upcoming data‑protection legislation will gradually raise responsible‑AI compliance, but current adoption remains very low.
EXPLANATION
An audience member notes that a new data‑protection law will be enforced soon, but presently only a tiny fraction of organisations practice responsible AI, indicating a lag between legislation and implementation.
EVIDENCE
The speaker references a law slated for October 2025, predicts an 18-24-month rollout, and states that only 0.1 % of organisations currently practice responsible AI, expecting gradual improvement [230-235].
MAJOR DISCUSSION POINT
Legislative impact on responsible AI
Argument 3
Participants asked how to rank the four lenses and when one (e.g., sovereignty) should outweigh others such as responsibility or value.
EXPLANATION
An audience question seeks clarification on prioritising the 4D dimensions, asking for scenarios where sovereignty might be favoured over responsible or valuable AI, and vice‑versa.
EVIDENCE
The audience member explicitly asks for discussion of scenarios where sovereignty is prioritized over responsibility or value, and how the lenses can be balanced [279].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The mapping and prioritisation of the four lenses, including scenarios where sovereignty may dominate, is addressed in the sovereign AI framework [S1].
MAJOR DISCUSSION POINT
Prioritisation of 4D lenses
Argument 4
Companies struggle to develop platform‑level AI solutions because large clients treat the technology as proprietary IP, limiting broader adoption.
EXPLANATION
A participant describes difficulty in scaling AI offerings when major customers demand exclusive ownership, preventing the creation of shared platforms that could serve multiple users.
EVIDENCE
He explains that a vending-machine AI built for PepsiCo cannot be offered to Coca-Cola, illustrating how client-specific IP demands hinder platform development [253-264].
MAJOR DISCUSSION POINT
IP constraints on platform scaling
Argument 5
Corporate reluctance to commercialize socially beneficial AI (e.g., sustainability IP) creates tension between responsible AI and value generation.
EXPLANATION
An audience member points out that some companies acquire sustainable‑technology IP but choose not to commercialise it, raising concerns about responsible AI practices and missed societal value.
EVIDENCE
She shares that a sustainability-focused IP was sold to a company that refused to commercialise it, highlighting a conflict between responsible AI and delivering value [266].
MAJOR DISCUSSION POINT
Responsibility vs. commercial value
Agreements
Agreement Points
A four‑dimensional (4D) framework—sovereignty, green (sustainability), responsible AI and valuable AI—is essential to build trust and successfully scale AI projects.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Introduces four lenses—Sovereignty, Green, Responsible AI, and Valuable AI—as a holistic approach to building trust and preventing harms. Sovereignty means control over data and models; Green AI links environmental impact to economic viability; Responsible AI encompasses ethics, bias, governance, security and human‑centered design; Valuable AI focuses on real‑world benefit and measurable outcomes.
Both speakers argue that evaluating AI projects through the four lenses helps anticipate risks, ensure trust and achieve scalable, beneficial outcomes [64-66][137-144][154-165][168-176][181-196].
POLICY CONTEXT (KNOWLEDGE BASE)
The combination of sovereignty, responsible, and green AI mirrors recent policy analyses that link sovereign AI strategies with responsible practices and emphasize the economic and environmental benefits of green AI [S51][S56].
There are inherent trade‑offs between AI sovereignty and the value delivered; organisations must balance control with real‑world benefits.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Discusses why proof‑of‑concepts fail, highlighting sovereignty issues and the need to prioritize dimensions based on high‑ and low‑concern harms. Explains that choosing fast external models may boost short‑term value but sacrifice sovereignty and long‑term control, requiring careful trade‑off decisions.
Both recognise that sovereignty and value can conflict and that organisations need to weigh these dimensions when designing AI systems [306-313][292-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions highlight the tension between national AI control (sovereignty) and delivering societal and economic value, noting trade-offs similar to those described in sovereignty-sustainability debates [S56][S51].
Governments should establish baseline safety standards and clear regulatory frameworks for high‑risk AI, with differing national approaches noted.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
Governments need to set baseline safety standards; the UK is moving toward third‑party supplier rules while the US lacks formal regulation. Encourages governments to support sovereign AI initiatives and set clear regulatory baselines for high‑risk applications. Private‑sector leaders seek clearer governmental guidance on safe AI use and model selection.
All three parties call for stronger governmental regulation and guidance to ensure trustworthy AI, noting variations between the UK and US models [213-224][292-298][210-228].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for baseline safety standards align with multi-level regulatory approaches advocated at the IGF and in national panel discussions, which stress the need for clear, risk-based frameworks across jurisdictions [S48][S49][S41].
Sustainability (green AI) is tightly linked to economic viability; greener systems are cheaper and more scalable.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Highlights trade‑offs where sustainability impacts may be accepted for rapid adoption, but organisations with strong carbon goals must prioritize green AI. States that environmental sustainability reduces costs and improves scalability, making green AI essential.
Both emphasize that environmental considerations are not optional but affect cost and scalability, so sustainability must be integrated into AI design [306-310][154-165].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on Green AI demonstrate that environmentally efficient models also reduce operational costs and improve scalability, supporting the link between sustainability and economic viability [S51][S52].
Similar Viewpoints
Both define responsible AI as a set of ethical, governance and human‑centred safeguards that are necessary for trustworthy AI [168-176].
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Responsible AI encompasses ethics, bias mitigation, governance, security and human‑centered design. Responsible AI includes ethics, bias, governance, security and human‑centered design.
Both stress the need for clear government policies and regulations to guide AI adoption in the private sector [213-224][210-228].
Speakers: Theresa Yurkewich Hoffmann, Audience (Ami Kotecha)
Governments need to set baseline safety standards and provide guidance for high‑risk AI use. Private‑sector leaders seek clearer governmental guidance on safe AI use and model selection.
Both acknowledge that responsible AI practices are currently scarce and that regulatory developments are needed to improve adoption [11-13][230-235].
Speakers: Theresa Yurkewich Hoffmann, Audience (legislation comment)
Only a small fraction of AI projects reach production due to lack of trust and governance. Upcoming data‑protection law will raise responsible‑AI compliance, but current adoption is very low.
Unexpected Consensus
Audience members overwhelmingly identified responsible and valuable AI as the most critical lenses, despite earlier emphasis on sovereignty and sustainability as equally vital.
Speakers: Theresa Yurkewich Hoffmann, Audience
Theresa presents all four lenses as essential and asks participants to pick the most important. Audience votes that responsible and valuable AI are the top priorities.
The audience’s preference for responsible/value over sovereignty or green AI was not anticipated given the presenters’ balanced framing, indicating a strong demand for ethical and impact-focused AI first [320-333][279].
Both speakers and audience acknowledge that only a tiny proportion of organisations currently practice responsible AI, yet they all agree on the urgency to develop frameworks and KPIs.
Speakers: Theresa Yurkewich Hoffmann, Audience (legislation comment)
Theresa notes that many AI pilots fail due to governance and trust gaps. Audience notes that only 0.1 % of organisations practice responsible AI today.
The convergence on the extremely low current adoption of responsible AI, despite different contexts (pilot failures vs. legislative rollout), was an unexpected point of agreement highlighting a shared perception of a critical gap [11-13][230-235].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry and governance reports repeatedly note the gap between responsible AI principles and actual practice, urging the creation of concrete frameworks and performance indicators [S43][S44][S45].
Overall Assessment

There is strong consensus among speakers and participants that AI deployment must be guided by a multi‑dimensional framework covering sovereignty, sustainability, responsibility and value; that trade‑offs between these dimensions need explicit management; and that government regulation and measurable KPIs are essential to build trust and scale AI responsibly.

High consensus on the need for a holistic, regulated and measurable approach, suggesting that future policy and practice are likely to converge on integrated frameworks that address all four lenses.

Differences
Different Viewpoints
Prioritisation of the four lenses (sovereignty, green, responsible AI, valuable AI)
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
Theresa suggests it is difficult to rank the lenses and places responsible/value higher while putting sovereignty lower [317-319]. Omeed argues that sovereignty is a critical, sometimes non-negotiable, dimension for trust and long-term control, and may outweigh value in certain scenarios [292-298]. The audience asks for a ranking and later votes for responsible/value as most important, seeking scenarios where sovereignty might dominate [315-319][279].
The speakers differ on which of the four dimensions should be considered the most essential. Theresa views all dimensions as important but leans toward responsible/value AI as the top priority, whereas Omeed stresses that sovereignty can be paramount for trust and may need to be prioritised over value. The audience seeks clarification and shows a split preference, indicating no consensus on ranking.
POLICY CONTEXT (KNOWLEDGE BASE)
The disagreement over lens prioritisation reflects the broader constructive debates on implementation details observed in AI policy roadmaps and research agendas [S45].
How to overcome IP constraints and build platform‑level AI solutions
Speakers: Audience (Ami Kotecha and vending‑machine entrepreneur), Theresa Yurkewich Hoffmann, Omeed Hashim
Audience members describe how exclusive IP demands from large clients prevent the creation of shared platforms, limiting broader adoption [253-264]. Theresa acknowledges the difficulty and mentions internal reuse of components but notes challenges in scaling across competing customers [265-267]. Omeed proposes a service-oriented, co-creation model that retains or shares IP while enabling multi-client platforms [273-276].
There is a disagreement on the best strategy to address proprietary IP that blocks platform development. The audience sees IP exclusivity as a barrier, Theresa points to internal component sharing as a partial remedy, while Omeed recommends a service‑based, co‑creation approach to retain IP yet allow broader use.
POLICY CONTEXT (KNOWLEDGE BASE)
WIPO discussions and global AI policy frameworks highlight the challenges posed by intellectual-property regimes for AI development and call for lower-cost, open-access mechanisms to enable platform-level solutions [S57][S58].
Extent and nature of government involvement in AI governance
Speakers: Audience (Ami Kotecha), Theresa Yurkewich Hoffmann, Omeed Hashim
The audience calls for clear governmental guidance on safe AI use, model selection and high-risk regulations [210-228]. Theresa outlines the need for regulation (e.g., UK third-party supplier rules) but also stresses private-sector responsibility, up-skilling and internal policies [213-224][342-362]. Omeed urges governments to back sovereign AI initiatives and set definitive safety baselines for high-risk applications [292-298][138-144].
All parties agree government has a role, but they diverge on how extensive it should be. The audience seeks direct, prescriptive guidance; Theresa emphasizes a balanced approach combining regulation with private‑sector actions; Omeed focuses on sovereign AI support and clear safety standards, indicating differing expectations of governmental scope.
POLICY CONTEXT (KNOWLEDGE BASE)
Diverse viewpoints on governmental roles are documented in calls for inclusive AI governance that goes beyond state actors and in analyses of multi-level regulatory models across regions [S40][S48][S49][S41].
Unexpected Differences
Perceived level of responsible‑AI adoption
Speakers: Audience (data‑protection law speaker), Theresa Yurkewich Hoffmann
The audience claims only 0.1 % of organisations currently practice responsible AI, despite upcoming data-protection legislation [234-235]. Theresa implies many customers are already “clued up” on responsible AI and that mapping harms is a common practice [202-203][306-313].
The audience’s statement suggests a near‑nonexistent uptake of responsible AI, whereas Theresa’s remarks convey that a substantial number of organisations already engage with responsible‑AI practices, revealing a surprising mismatch in perceived adoption levels.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent assessments indicate low adoption of responsible AI practices across organisations, underscoring the need to bridge the principle-practice gap highlighted in governance reviews [S43][S44].
Overall Assessment

The discussion reveals several key points of contention: (1) how to rank the four AI trust dimensions, especially the relative weight of sovereignty versus responsible/value AI; (2) the optimal approach to handling IP and building platform‑level AI services; (3) the appropriate scope of government regulation and guidance; and (4) a surprising gap between perceived and actual responsible‑AI adoption. While participants share a common goal of trustworthy, scalable AI, they diverge on priorities, implementation pathways, and the current state of practice.

Moderate to high – the disagreements centre on strategic priorities and policy approaches rather than factual disputes, which could impede coordinated action and slow the development of unified frameworks for AI governance.

Partial Agreements
Theresa and Omeed concur that AI projects must be evaluated through multiple lenses (sovereignty, sustainability, responsibility, value) and that a comprehensive, multi‑dimensional strategy is essential for trustworthy AI deployment.
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Both present a four-dimensional (sovereignty, green, responsible AI, valuable AI) framework for building trust and preventing harms [64-66][137-144][154-165][168-176][181-196]. Both state that no single lens is sufficient and that a holistic approach is required for scaling AI [349-351].
Takeaways
Key takeaways
Only about 30 % of AI pilots reach production, largely due to trust deficits across reliability, data handling, societal impact, and job effects.
A rapid rise in AI incidents (e.g., voice-cloning scams, unattributed AI-generated books, biased facial recognition) erodes public confidence.
The 4D framework (Sovereignty, Green/Sustainability, Responsible AI, Valuable AI) is proposed as a holistic lens for building trustworthy, scalable AI.
Six common failure categories for PoCs were identified: adoption/impact gap, governance failures, misalignment with societal goals, sovereignty issues, sustainability pressures, and change-management challenges.
Government regulation and AI sovereignty are critical; differing national approaches (UK supplier rules, the US lack of formal rules, Serbia’s domestic LLMs) influence trust and adoption.
Trade-offs between the four dimensions are inevitable; organisations must map high- versus low-concern harms and make transparent prioritisation decisions.
The private sector faces a platform-versus-IP dilemma: large clients treat AI as proprietary, hindering broader societal value and responsible-AI outcomes.
Concrete recommendations include creating an AI policy, adopting a responsible-AI framework, defining measurable KPIs for each dimension, up-skilling teams, and lobbying for sovereign-AI support.
Resolutions and action items
Publish and distribute the discussed white paper (link to be shared via LinkedIn and email).
Encourage participants to draft an AI policy that explicitly addresses the four dimensions.
Adopt a responsible-AI framework with defined questions, safeguards, and governance processes.
Develop quantitative KPIs for sustainability, ethics, user impact, and business value to support funding and reporting.
Implement up-skilling programmes and incorporate diverse stakeholder perspectives into AI projects.
Engage with government bodies to advocate for baseline safety standards and sovereign-AI initiatives (e.g., domestic model development, smart-data sharing).
Consider a service-oriented, co-creation model for platform-level AI to retain IP while enabling multi-client use.
Use a high-/low-concern harm mapping exercise to prioritise trade-offs before scaling pilots.
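The harm-mapping exercise recommended above can be sketched programmatically. The example harms, lens labels, and concern levels below are hypothetical assumptions for illustration only; the point is simply to record identified harms per lens and sort them so that trade-off decisions are explicit and documented before a pilot scales:

```python
# Hypothetical harm-mapping exercise across the four lenses.
# The harms and concern levels listed are illustrative assumptions only.
harms = [
    {"lens": "sovereignty", "harm": "provider withdraws hosted model", "concern": "high"},
    {"lens": "responsible", "harm": "biased outputs in eligibility checks", "concern": "high"},
    {"lens": "green", "harm": "elevated energy use during pilot", "concern": "low"},
    {"lens": "valuable", "harm": "unclear productivity baseline", "concern": "low"},
]

def prioritise(harm_list):
    """Order harms so high-concern items are addressed before scaling."""
    order = {"high": 0, "low": 1}
    return sorted(harm_list, key=lambda h: order[h["concern"]])

for h in prioritise(harms):
    print(f'[{h["concern"].upper()}] {h["lens"]}: {h["harm"]}')
```

Even a table this simple forces the prioritisation conversation the panel called for: each lens surfaces its own harms, and the high-concern ones must be resolved or explicitly accepted before production.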
Unresolved issues
How to formally rank the four lenses (Sovereignty, Green, Responsible, Valuable) for a given project, and when one should outweigh the others.
Specific pathways for governments to provide clear, enforceable guidance on safe AI use for private-sector innovators.
Practical mechanisms to overcome client-driven IP lock-in and enable platform-scale AI solutions across competing firms.
Details on how upcoming data-protection and personalisation legislation will be operationalised and enforced.
Concrete examples of KPI definitions for each dimension and how they should be integrated into project governance.
Resolution of the audience’s final question about balancing sovereignty versus value/responsibility in real-world deployments.
Suggested compromises
Adopt a hybrid service-plus-IP model: retain core IP while offering a shared platform/service layer for multiple clients.
Map harms into high- and low-concern categories to transparently decide which dimension to prioritise in a given context.
Treat Responsible AI as an umbrella that can incorporate sustainability and value considerations, reducing the need for separate trade-offs.
Balance rapid value delivery (using external models) with long-term sovereignty by gradually transitioning to domestically hosted models.
Accept that sustainability may increase costs initially but yields long-term economic and scalability benefits, encouraging joint investment.
Thought Provoking Comments
Only 30 % of all AI projects actually go into production. The main reason we’re seeing so many pilots fail is that we don’t have trust – trust in the technology, in the data, in the outcomes, and in the impact on jobs.
She quantifies the failure rate of AI pilots and pins the root cause on trust, framing the whole session’s problem statement and giving the audience a clear metric to rally around.
This comment set the agenda for the whole discussion, prompting participants to think about trust‑related dimensions and leading directly to the later introduction of the 4‑D framework.
Speaker: Theresa Yurkewich Hoffmann
We’ve built a 4‑D model – Sovereignty, Green (sustainability), Responsible AI and Valuable AI – as four lenses you need to look at to build trust and avoid harms before you scale.
It introduces a concrete, structured tool that reframes the conversation from vague ‘trust’ to actionable categories, giving participants a shared language.
The 4‑D model became the backbone of the breakout scenarios and the poll questions, steering the discussion toward evaluating each dimension in real‑world examples.
Speaker: Theresa Yurkewich Hoffmann
Sovereignty isn’t just about an organisation or a nation – it’s about the people whose data is used. Who is looking at your data, why, and what they will do with it determines whether people will trust the system.
He expands the notion of sovereignty from a technical or geopolitical issue to a human‑centred one, linking data control directly to user trust.
This broadened view shifted the tone from a purely technical discussion to one that emphasises citizen rights, prompting audience members to raise concerns about data ownership and regulatory gaps.
Speaker: Omeed Hashim
Sustainability and cost are two sides of the same coin – the greener the system, the cheaper it is to run at scale. If an AI system can’t be economically viable, it won’t scale, and the carbon impact will stay high.
He ties environmental impact to business economics, turning ‘green AI’ from an optional add‑on into a core business requirement.
This insight sparked a brief debate on trade‑offs between performance and carbon footprint, and later informed the audience poll where sustainability was ranked low by many participants.
Speaker: Omeed Hashim
Private‑sector firms need government to define what is safe to use and what is still experimental. Without clear risk categories (low, medium, high) and transparency rules, companies are left to guess and risk failure.
She brings a real‑world policy perspective, highlighting the gap between fast‑moving AI innovation and slow regulatory frameworks.
Her comment opened a new thread about the role of public policy, leading to further discussion on sovereign AI policies, responsible AI frameworks, and the need for a national AI strategy.
Speaker: Ami Kotecha (Audience)
Instead of selling a bespoke IP to a single client, think of AI as a service platform that can be layered and co‑created. This avoids lock‑in and lets multiple customers benefit, similar to how India’s UPI created an ecosystem.
He proposes a concrete business‑model solution to the audience’s frustration about IP lock‑in, drawing on the successful UPI example.
This suggestion reframed the earlier complaints about proprietary solutions into a discussion about ecosystem building, prompting participants to consider platform strategies and collaborative models.
Speaker: Omeed Hashim
When you have to trade‑off between sovereignty and value, you must ask: if we rely on foreign models we get speed, but we lose control. If we build locally we keep control but may sacrifice short‑term value. The decision has to be explicit and documented.
He articulates the core tension that many participants were grappling with, turning an abstract dilemma into a concrete decision‑making framework.
This comment acted as a turning point, leading to the audience poll on which dimension is “must‑have” and reinforcing the session’s emphasis on explicit trade‑off analysis.
Speaker: Omeed Hashim
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that moved the conversation from a vague sense of AI pilot failure to a structured, multi‑dimensional analysis. Theresa’s opening statistics and the 4‑D framework gave participants a shared problem definition and a toolkit. Omeed’s deep‑dives into sovereignty, sustainability, and trade‑offs reframed technical concerns as human‑centred and economic issues, prompting the audience to consider policy, business models, and ecosystem approaches. Audience contributions, especially the call for government guidance and the platform‑vs‑IP dilemma, introduced real‑world pressures that forced the speakers to connect the theoretical lenses to actionable strategies. Together, these comments shaped a dynamic dialogue that progressed from problem identification to concrete recommendations on policy, governance, and business design.

Follow-up Questions
How will AI adoption and government regulation evolve over the next 6‑12 months/years, and what role will governments play in defining safe versus experimental AI use?
Understanding the timeline and scope of regulatory frameworks is crucial for private firms to plan investments, risk management, and compliance strategies.
Speaker: Ami Kotecha (co‑founder, Amro Partners)
How can companies build platform‑level AI solutions rather than bespoke, client‑specific ones, especially when large customers demand exclusivity?
A platform approach can unlock broader market reach and societal impact, but requires strategies to overcome IP lock‑in and client exclusivity pressures.
Speaker: Audience member (entrepreneur building agentic AI for vending machines)
How should organizations handle situations where a buyer acquires technology but does not commercialize it for societal benefit, raising concerns for responsible and valuable AI?
This scenario highlights tensions between commercial interests and the public good, necessitating guidance on responsible stewardship of AI assets.
Speaker: Audience member (owner of sustainability‑focused IP)
Why aren’t major IT consulting firms (e.g., Kainos, Infosys, Accenture) pursuing platform/service models for AI, and can such initiatives be driven organically, or do they need a coordinated effort?
Identifying barriers to platform adoption within large service firms can inform policy or industry initiatives to promote scalable, reusable AI solutions.
Speaker: Audience member (same as above)
In what scenarios might prioritising AI sovereignty conflict with responsible or valuable AI, and how can these trade‑offs be managed or aligned?
Understanding trade‑offs between control of data/models and ethical/value outcomes is essential for designing AI governance frameworks that balance national security with societal benefit.
Speaker: Audience member (question on sovereignty vs. responsibility/value)
How should the four AI lenses—sovereignty, green/sustainability, responsible, and value—be ranked in priority for a given project?
Prioritisation guidance helps project teams allocate resources and address the most critical dimensions early, improving chances of successful production deployment.
Speaker: Audience member (ranking low to high)
Which single AI lens is an absolute must‑have to avoid project derailment?
Identifying a non‑negotiable dimension can focus governance efforts and ensure that critical risks are not overlooked.
Speaker: Audience member (poll on absolute must‑have lens)
What quantitative KPIs can be developed to measure sustainability, ethics, and value in AI projects?
Turning qualitative principles into measurable metrics enables monitoring, reporting, and accountability, which are needed for funding and regulatory compliance.
Speaker: Theresa Yurkewich Hoffmann (and Omeed Hashim)
What are the detailed implications of data and model sovereignty—especially regarding offshore hosting, auditability, and control—and how can they be mitigated?
Data/model sovereignty affects trust, legal compliance, and operational continuity; research is needed to create practical guidelines for sovereign AI deployments.
Speaker: Omeed Hashim
How can organisations systematically assess and manage trade‑offs between rapid AI value delivery and sustainability or sovereignty concerns?
A structured trade‑off analysis framework would help decision‑makers justify compromises and align AI initiatives with broader organisational goals.
Speaker: Theresa Yurkewich Hoffmann
What lessons can be drawn from human‑centred AI design in sensitive domains (e.g., elderly monitoring) regarding responsibility, privacy, and value creation?
Case studies in high‑stakes settings can reveal unforeseen harms and inform best practices for responsible, valuable AI design.
Speaker: Omeed Hashim (example of nursing home AI)
How can the growing number of AI incidents reported by the OECD AI Observatory be reduced, and what preventive measures are most effective?
Understanding root causes of AI harms is essential for developing mitigation strategies, improving trust, and lowering the incident rate.
Speaker: Theresa Yurkewich Hoffmann

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale – Keynote Anne Bouverot

Building Trusted AI at Scale – Keynote Anne Bouverot

Session at a glance: Summary, keypoints, and speakers overview

Summary

Speaker 1 introduced Ms. Anne Bouverot, France’s Special Envoy for Artificial Intelligence and former Director General of the GSMA, noting her unique position at the crossroads of diplomacy, technology, and AI governance, and invited the audience to hear her keynote at the AI Impact Summit, a platform dedicated to responsible AI regulation and ethics [1-4][5-8].


Bouverot highlighted that hosting the summit in India, a Global South nation, conveys a strategic message that AI is a worldwide transformation, not the privilege of a few nations or corporations, and she cited India’s strong AI market, ranked third globally for competitiveness by the Stanford AI Index, as evidence of its leadership potential; she also pointed to the longstanding Franco-Indian partnership as a foundation for joint action [15-18][20-23][24-26]. She framed AI as a focal point of intense geopolitical and economic competition, referencing the US “Stargate” investment and China’s DeepSeek initiative, while noting the emergence of coalitions of willing countries, including France, India, Brazil, Japan, Germany, and Canada, committed to inclusive and sustainable AI governance [28-32][33-35].


Concrete collaboration examples were presented: an AI tool at the All India Institute of Medical Sciences can detect tuberculosis from a smartphone-recorded cough, illustrating a tangible public-health impact [41-43]; a memorandum of understanding between India’s iSpirit and France’s Health Data Hub will enable the first privacy-preserving cross-border health-data transfers to support joint research and disease-cure discovery [44-48]; academic exchanges under the “RUSH” program foster scientific cooperation, with the next edition slated for France [49-54]; and an open-hardware initiative to promote linguistic diversity and AI-powered translation, a partnership between Bhashini and Current AI, leverages India’s 22 official languages to address cultural representation challenges [59-62].


Finally, Bouverot announced a coalition for sustainable AI co-chaired by France and India, aimed at reducing AI’s energy footprint through a Resiliency Working Group and a resilient AI challenge [64-69]; she stressed child safety as a priority, calling for stronger age-verification mechanisms and anti-cyberbullying measures in line with President Macron’s agenda [70-77]; concluding that AI is a societal, cultural, and political transformation that must be shaped proactively, she affirmed France’s readiness to work with India and other partners to build an inclusive, sovereign, and sustainable AI ecosystem rooted in the common good [78-86].


Keypoints

Hosting the AI Impact Summit in India underscores the strategic and symbolic importance of involving the Global South in AI governance.


Bouverot stresses that holding the summit in India “is very important from a symbolic perspective, but it is even more important from a strategic perspective” and that it sends a “very powerful message… AI is not a privilege of a few nations” [15-19].


AI is now a focal point of intense geopolitical competition, creating both risks and opportunities for multilateral collaboration.


She references the “Stargate” U.S. investment and China’s “DeepSeek” effort, describing AI as “at the center of a fierce geopolitical and economical competition” while noting the emergence of “coalitions of the willing… France, India, Brazil, Japan, Germany, Canada” that share an inclusive, sustainable vision [28-33].


Concrete Franco-Indian initiatives span public health, data governance, research, and tools for the common good.


Examples include a cough-analysis tool for early tuberculosis detection at AIIMS [42-44], a privacy-preserving health-data sharing MOU between iSpirit and France’s Health Data Hub [44-48], the RUSH scientific exchange program [51-55], and the launch of an open-hardware linguistic-diversity toolkit [59-63].


Sustainable and safe AI development is framed as a joint responsibility, with new coalitions and challenges aimed at energy efficiency and child protection.


Bouverot announces a “coalition for sustainable AI” and a “Resiliency Working Group” co-chaired by France and India, a “resilient AI challenge,” and calls for stronger age-verification and anti-cyberbullying measures [64-70][71-76].


A call to actively shape AI’s societal impact rather than passively accept its trajectory.


She concludes with a rhetorical contrast: “will we shape AI? Or will we tell our children that we didn’t even try?” positioning France as ready to co-create an “inclusive, sustainable, sovereign” AI ecosystem [80-86].


Overall purpose/goal:


The keynote aims to showcase and deepen Franco-Indian cooperation as a model for global AI governance, highlighting concrete collaborative projects, launching new initiatives (open-source tools, sustainable-AI challenges), and urging a collective, impact-focused approach that balances innovation with ethical, environmental, and safety considerations.


Overall tone:


The speech begins with celebratory and diplomatic enthusiasm, shifts to a strategic and urgent tone when describing geopolitical stakes, moves into a collaborative and hopeful mood while detailing joint projects, and adopts a cautionary yet resolute stance when addressing safety and sustainability. The tone remains consistently forward-looking, ending with an inspirational call to action.


Speakers

Anne Bouverot – Special Envoy for Artificial Intelligence, France; Diplomat; Technologist; Former Director General of the GSMA (Global System for Mobile Communication Association); Chair of the board of the École Normale Supérieure (ENS)[S2][S3]


Speaker 1 – Event moderator/host who introduced the keynote speaker[S4]


Additional speakers:


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened the AI Impact Summit by introducing Ms Anne Bouverot, France’s Special Envoy for Artificial Intelligence and former Director General of the GSMA, highlighting her unique blend of diplomatic, technological and governance expertise [1-4]. Speaker 1 then framed the summit as a platform for discussing responsible AI regulation and ethics, noting the audience’s keen interest in these issues [5-8].


Ms Bouverot underscored that holding the summit in India is both symbolically and strategically important. She argued that the venue sends a powerful message that AI “is not a privilege of a few nations, not the preserve of a few companies” but a global transformation that must be shaped by all [15-19]. Citing the Stanford AI Index, she pointed out that India ranks third worldwide in AI market competitiveness, a status that reflects the country’s large market, vibrant ecosystem and strong entrepreneurial dynamism [20-23]. She linked this to the long-standing Franco-Indian partnership, describing this as the year of Franco-Indian innovation and emphasizing shared values on AI sovereignty and innovation [24-26].


Turning to geopolitics, Bouverot described AI as the centre of a fierce geopolitical and economic competition, referencing the United States’ “Stargate” investment and China’s “DeepSeek” initiative [28-30]. She noted that this rivalry has simultaneously spurred the formation of “coalitions of the willing”, including France, India, Brazil, Japan, Germany and Canada, which share a vision of inclusive, sustainable and legitimate AI governance [31-35]. This coalition, she suggested, marks a pivotal moment for asserting greater AI sovereignty on the world stage [31-35].


She marked a shift from the previous “AI Action Summit” to the current “AI Impact Summit”, adding “This year in Delhi we speak about impact.” The focus now is on measurable impact in sectors such as education and public health, not merely theoretical discussion [36-38].


Concrete Franco-Indian collaborations were then detailed. In public health, Bouverot highlighted an AI application at the All India Institute of Medical Sciences (AIIMS) that can detect tuberculosis from a simple cough recorded on a smartphone, illustrating a practical, tangible application of AI [41-44]. In data governance, she announced a memorandum of understanding between India’s iSpirit and France’s Health Data Hub that will enable the world’s first privacy-preserving cross-border health-data transfers, thereby facilitating joint research and the search for new cures [44-48].


Academic cooperation was showcased through her role as chair of the board of the École Normale Supérieure, where she has overseen the “RUSH” scientific-exchange programme, a series of high-level talks that this week brought French and Indian researchers together, with the next edition slated for France [49-55].


Addressing AI for the common good, Bouverot referenced John Palfrey’s remarks on the need for open datasets and tools beyond venture-capital funding, and announced the launch of an open-hardware tool for linguistic diversity and AI-powered translation, a partnership between Bhashini and Current AI that leverages India’s 22 official languages [56-63]. This initiative aims to ensure cultural representation in AI systems worldwide.


Sustainability was framed as a core responsibility. She recalled the coalition for sustainable AI launched in Paris and announced that France and India will co-chair the Resiliency Working Group, which will run a “Resilient AI Challenge” to develop energy-efficient AI solutions, stressing that sustainability must be built into design rather than treated as an afterthought [64-69].


Child safety also featured prominently. Citing President Macron’s priority, Bouverot called for stronger age-verification mechanisms and robust anti-cyberbullying measures, arguing that innovation and protection must progress hand-in-hand [70-77].


In her closing remarks, Bouverot portrayed AI as a societal, cultural and political transformation that is already redefining work and health [78-86][S19]. She posed a rhetorical challenge – “will we shape AI, or will we tell our children that we didn’t even try?” – and affirmed France’s readiness to collaborate with India and other willing partners to build an AI ecosystem that is inclusive, sovereign, sustainable and rooted in the common good [78-86].


Session transcript: Complete transcript of the session
Speaker 1

Well, it’s my great pleasure to invite our next keynote speaker, who is Ms. Anne Bouverot, Special Envoy for Artificial Intelligence, France. Diplomat, a technologist, and former Director General of the GSMA, which is Global System for Mobile Communication Association. Ms. Bouverot sits at the heart of France’s efforts to lead on AI governance and international cooperation. She has been instrumental in advancing the global conversation on responsible AI regulation by bridging innovation policy and multilateral diplomacy at the highest levels. So we are about to set the stage before I invite Ms. Bouverot here, but indeed, this is one platform, the AI Impact Summit. Thank you. Where we do get the opportunity to listen to all these esteemed speakers as they put forth their points.

their remarks, and their valuable insights, which is based on years of experience, ladies and gentlemen. At the time, we are all concerned about AI regulations, and we are all concerned about ethical and responsible AI. It would be a pleasure to listen to our next keynote speaker. Ladies and gentlemen, with a round of applause, please welcome Ms. Anne Bouverot, Special Envoy for Artificial Intelligence, France.

Anne Bouverot

Namaste. Bonjour. Excellencies, distinguished guests, dear guests. Dear friends. Thank you so much for welcoming me here today at the AI Impact Summit. I had the privilege to lead the organization of the Paris Summit about exactly one year ago. It is in Paris that India announced to the world its desire, its ambition, its resolve to organize the AI Impact Summit that is taking place now. Holding an AI Summit in a country from the global south is very important from a symbolic perspective, but it is even more important from a strategic perspective. It sends a very powerful message to the world. AI is not a privilege of a few nations, not the preserve of a few companies.

It is a global transformation and it must be shaped by all. India is, in my view, the perfect country to host this summit. I don’t need to remind you about the scale of this market, the richness of the ecosystem, the strength of the technological expertise here, your incredible entrepreneurial dynamism. India has, over the years, positioned itself to be at the forefront of both AI development and adoption. Just to quote a source, the Stanford AI Index ranks India third globally in AI market competitiveness. This is not by chance. Yes. France and India have a longstanding partnership and I believe share a common understanding of what is at stake. This year is the year of Franco-India.

Franco-Indian or Indo-French innovation. And last year in Paris, the geopolitics of AI started to be very visible. Remember one year ago, the announcement of Stargate, the US saying that they were investing in AI to really dominate the world. And remember DeepSeek, China saying that they’re also in the race with a different way. AI is at the center of a fierce geopolitical and economical competition. But this also created a momentum for stronger collaboration between countries such as France, India, Brazil, Japan, Germany, Canada, and many others. Coalitions of the willing of the countries that have key talent in AI, who share a vision that it must be inclusive and sustainable and a legitimate solution. Aspiration for more sovereignty.

I believe this is a very key geopolitical moment. In Paris, we spoke about action. This year in Delhi, we speak about impact. We’re going from the AI Action Summit to the AI Impact Summit. Impact in education, in public health, impact that improves lives, not just in theory, but in practice. And there are a number of areas in which our strong partnership between France and India is very relevant and strategic. I’d like to start with public health. During my previous visit to India back in November, I was deeply impressed by some AI applications, and in particular by an AI application that I saw at AIIMS, the All India Institute of Medical Sciences. An application which, if you just cough into a smartphone, AI analyzes the sound and can be an early detector of tuberculosis versus a more classical cold or other viral illness.

This is a very important, very practical, very tangible application of AI for public health. Second, data sharing and data governance. The ongoing work between iSpirit here in India, the Health Data Hub in France, and other partners, and the recent MOU that was signed, will enable, I think, as a first in the world for data transfer, for health data transfer across borders in a privacy-preserving way. This will enable joint research. And finding new cures for diseases. Third, research and academia. I chair the board of one of France’s leading academic institutions, École Normale Supérieure, Normale Sup. So this is a subject that is very dear to my heart. This week, there was a full program of scientific exchanges.

We called it RUSH because there’s a rush to cooperate between our two countries. This was a series of exceptional talks by researchers and heads of institutions. And the next edition of that will be held in France. Fourth, I want to talk about AI for the common good. And I was very pleased to hear John Palfrey from the MacArthur Foundation talk about Current AI. Current AI is a foundation that, with the help of his foundation, but also of the United Nations, and also at the initiative of France, India and other countries, and with other partners, we launched in Paris. This is a foundation to help sustain AI development for the common good by helping to enable open data sets, open source tools, whatever will not be funded by VCs and private funders.

This year, at this summit, we are launching an open hardware tool to promote linguistic diversity and AI-powered translation. This is a partnership between Bhashini and Current AI. With its 22 official languages and many more being spoken here in India, India perfectly embodies the challenges and the opportunities of cultural representations in AI systems. This is faced by many countries around the world, but this is a perfect place, India, to launch this initiative. And fifth, and not least, sustainable AI. In Paris, we launched a coalition for sustainable AI. AI requires huge amounts of energy and risks putting our climate goals and our desire to preserve the planet at risk. So we launched this coalition and this year we co-chair, France co-chaired with India, the Resiliency Working Group.

And sustainability is really something that needs to be thought of at the beginning, by design, in AI systems. It cannot be an afterthought. We’re launching today, together with India and other partners, a resilient AI challenge that will help find solutions in this very important area. And finally, we must speak about safety. Especially for children. This is a priority for President Macron, if you heard him speak yesterday. This is a priority for him because this is a priority for citizens in France. I believe this is a priority for parents and citizens around the world. AI can enable a number of great things in public health, in other areas, but it must not become a tool that endangers children.

We must demand and strengthen age verification mechanisms. We must fight against cyberbullying. Innovation and protection can and must go hand in hand. Excellencies, dear friends, AI is not only a technological transformation. It is a societal, cultural and political transformation. The question is not whether AI will change our societies. It is already redefining work. It will transform public health. The real question is, will we shape AI? Or will we tell our children that we didn’t even try? France stands ready to work with India and with all willing partners to build an AI ecosystem that is inclusive, sustainable, sovereign, and rooted in the common good. The future of AI must not be written for the world. It must be written with

Related Resources: Knowledge base sources related to the discussion topics (20)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Ms Anne Bouverot is France’s Special Envoy for Artificial Intelligence and former Director General of the GSMA, and she chairs the board of the École Normale Supérieure.”

The knowledge base lists Anne Bouverot as Special Envoy for AI, former Director General of the GSMA, and Chair of the board of ENS, confirming the report’s description [S2].

Confirmed (high)

“According to the Stanford AI Index, India ranks third worldwide in AI market competitiveness.”

Stanford’s AI Index ranks India third in AI penetration and preparedness, supporting the claim of a third-place ranking [S78].

Additional Context (medium)

“Bouverot said the rivalry has spurred the formation of “coalitions of the willing”, including France, India, Brazil, Japan, Germany and Canada, sharing a vision of inclusive AI governance.”

The knowledge base mentions the concept of “ad-hoc coalitions of the willing” in AI governance discussions, though it does not list the specific countries, providing contextual support for the coalition idea [S86].

Confirmed (low)

“Bouverot referenced John Palfrey’s remarks on the need for open datasets and tools beyond venture‑capital‑driven models.”

John Palfrey is identified in the knowledge base as a representative of the MacArthur Foundation who speaks on open data and AI openness, confirming his relevance to the discussion [S2].

External Sources (86)
S1
THE FORGOTTEN FRENCH Exiles in the British Isles, 1940-44 — – Mauriac, C., The Other de Gaulle (London, Angus & Robertson, 1973) – Michel, H., Histoire de la France Libre (P…
S2
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S3
Inclusive AI_ Why Linguistic Diversity Matters — -Anne Bouverot- Special envoy to the president (France)
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of reg…
S8
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Lastly, the analysis also highlights specific stances taken by some speakers. One speaker supports the implementation of…
S9
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — **Regulatory approaches:** Speakers emphasized different aspects of strengthening oversight, from litigation capabilitie…
S10
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-keynote-anne-bouverot — I believe this is a very key geopolitical moment. In Paris, we spoke about action. This year in Delhi, we speak about im…
S11
Keynote-HE Emmanuel Macron — Dividing racism destroying, sharing racism taking. France intends to use its G7 presidency to foster that vision. I know…
S12
Global Perspectives on Openness and Trust in AI — I don’t know, is really the answer. Governance is such a broad word. There’s a lot of, for example, open source is reall…
S13
Building Scalable AI Through Global South Partnerships — The institute’s work on tuberculosis—the world’s largest infectious disease killer—demonstrates AI’s potential to addres…
S14
AI for Social Good Using Technology to Create Real-World Impact — First one is diagnosis and diagnosing TB in economically vulnerable communities isn’t easy. X -ray machines, sputum anal…
S15
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — An additional benefit of promoting data sharing and adoption of successful health improvement models is the potential fo…
S16
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And the rapid advances of these technologies, however, has not been so far equitably distributed and that is one challen…
S17
I NTRODUCTION — Leadership commitment drives collaboration and momentum.
S18
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — One of the main obstacles to achieving digital inclusion is the lack of linguistic diversity in cyberspace. This problem…
S20
WS #362 Incorporating Human Rights in AI Risk Management — Stadelmann highlights the importance of global AI summits in fostering international cooperation on AI governance, with …
S21
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S22
Military AI: Operational dangers and the regulatory void — Equally concerning is the regulatory gap enabling these technologies to proliferate. Humans are present at every stage f…
S23
AI diplomacy — For centuries, power was defined by territory, armies, and economic might. Today, a new element is paramount: data and t…
S24
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S25
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — We support innovation when it reinforces our economies. Of course, we are committed to making the world a better place f…
S26
In brief — Overall, evidence can be applied in public health for many purposes, including strategic decision-making, p…
S27
Harnessing AI for Child Protection | IGF 2023 — The rise of artificial intelligence (AI) presents significant concerns regarding the exploitation of children, particula…
S28
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S29
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Audience: Hello. I’m also a researcher in the AI policy lab. And I also want to comment on this. I also want to comment …
S30
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S31
WS #283 AI Agents: Ensuring Responsible Deployment — These key comments fundamentally transformed what could have been a technical discussion about AI governance into a nuan…
S32
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Civil Society Role and Accountability Legal and regulatory | Human rights | Development Neema Iyer proposes that befor…
S33
Child participation online: policymaking with children | IGF 2023 Open Forum #86 — Furthermore, the analysis recognizes that age verification poses a significant challenge in ensuring online child safety…
S34
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Overall, the discussion on child safety and online policies emphasised the need for a balanced approach, taking into acc…
S35
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — This requires GIZ practitioners to find a delicate balance between these competing priorities to ensure that child onlin…
S36
Lightning Talk #209 Safeguarding Diverse Independent NeWS Media in Policy — ## Background and Research Context None identified beyond those in the speakers names list.
S37
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S38
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S39
Table of contents — + Even though Estonia is esteemed as a digital country in the world, our attention and resources are largely directed to…
S40
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of reg…
S41
Hard power of AI — In the analysis, the speakers address several important aspects related to artificial intelligence (AI) and raise a rang…
S42
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S43
State of play of major global AI Governance processes — The convention is promoted as an empowering mechanism, increasing transparency and accountability. It ensures that, wher…
S44
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The analysis also suggests that responsible development, governance, regulation, and capacity building should be multi-s…
S45
Smart Regulation Rightsizing Governance for the AI Revolution — However, rather than adopting a pessimistic stance, Wilkinson proposed a pragmatic alternative: coalition building aroun…
S46
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S47
Indias AI Leap Policy to Practice with AIP2 — This directly challenges the prevalent approach of relying on voluntary ethics guidelines, arguing for concrete regulato…
S48
Building Trusted AI at Scale – Keynote Anne Bouverot — This statement sets the foundational tone for the entire speech, establishing the philosophical framework that AI govern…
S49
WS #362 Incorporating Human Rights in AI Risk Management — Stadelmann highlights the importance of global AI summits in fostering international cooperation on AI governance, with …
S50
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — In the Global South, the timing and the location are equally important. As AI technology has continued to advance so has…
S51
Shaping the Future AI Strategies for Jobs and Economic Development — Thank you. and how safety is governed under real constraints, how AI systems actually reach the people and states often …
S52
Military AI: Operational dangers and the regulatory void — Equally concerning is the regulatory gap enabling these technologies to proliferate. Humans are present at every stage f…
S53
Laying the foundations for AI governance — This comment introduced a different geopolitical perspective that complicated the discussion in important ways. While it…
S54
9821st meeting — Ecuador: Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S55
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S56
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The AI Impact Summit demonstrated that successful AI development requires more than technical excellence—it demands inst…
S57
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — We support innovation when it reinforces our economies. Of course, we are committed to making the world a better place f…
S58
Luxembourg’s Data Strategy: Accelerating Digital Sovereignty 2030 — Concrete applications of the strategy span multiple sectors:
S59
In brief — Public health interventions may be seen as successful individual health interventions applied on a wide sc…
S60
WS #172 Regulating AI and Emerging Risks for Children’s Rights — There is a positive trend of AI companies embracing safety by design principles and integrating them into their developm…
S61
Responsible AI for Children Safe Playful and Empowering Learning — The speakers demonstrate remarkably high consensus on prioritizing child safety, agency, and holistic development over r…
S62
Harnessing AI for Child Protection | IGF 2023 — The rise of artificial intelligence (AI) presents significant concerns regarding the exploitation of children, particula…
S63
US Departments of Energy and Commerce unite for safe AI development under new partnership — The US Department of Energy (DOE) and the US Department of Commerce (DOC) have joined forces to promote the safe, secure, …
S64
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ke GONG: Thank you. Thank you, David. Ladies and gentlemen, dear colleagues, on behalf of one of the organizers of this …
S65
From Technical Safety to Societal Impact Rethinking AI Governance — Citizens must actively insist on safety measures rather than expecting automatic benefits
S66
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignum: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S67
Keynote-Rajesh Subramanian — This shifts the narrative from passive adoption to active creation and responsibility. It challenges organizations to mo…
S68
Conversation: 01 — Artificial intelligence
S69
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — The Bletchley Park AI summit holds great importance in the field of AI governance. It showcases the UK’s leadership in A…
S70
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you, and good morning again, ladies and gentlemen. I guess, Latifa, picking up as you were a…
S71
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S72
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Audience** – Various attendees who asked questions during the session Dafna Feinholz: Okay, good morning, good morn…
S73
Keynote Addresses at India AI Impact Summit 2026 — And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t…
S74
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-01 — I think that’s very, very well put. And, you know, this was, ladies and gentlemen, such a powerful discussion because wh…
S75
The Global Economic Outlook — Georgieva emphasizes the importance of making artificial intelligence accessible to all, not just a privileged few. She …
S76
Press Conference: Closing the AI Access Gap — States that progress cannot be solely delivered by a few countries and companies
S77
Multistakeholder Partnerships for Thriving AI Ecosystems — Both speakers emphasize that technology must be made accessible and available to all, not concentrated in the hands of a…
S78
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Kristalina Georgieva – Khalid Al-Falih Economic | Development | Infrastructure Five layers id…
S79
The Global Power Shift India’s Rise in AI &amp; Semiconductors — And with the whole ecosystem around startups, we all know India is the third largest startup ecosystem of the world. Wit…
S80
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S81
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — “Sweden is a proud friend of India.”[21]. “Sweden intends to be a reliable and innovative partner as India continues its…
S82
The Role of Government and Innovators in Citizen-Centric AI — The discussion maintained an optimistic and collaborative tone throughout, with speakers expressing enthusiasm about AI’…
S83
Review of AI and digital developments in 2024 — Satellite constellations were in the center of geopolitical and geoeconomic race. SpaceX led this race with adding count…
S84
Hard Power: Wake-up Call for Companies / DAVOS 2025 — China’s Role in the Global Economy: Mousavizadeh highlights the intensifying competition between the US and China in adv…
S85
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as an AI race with a single winner. Officials argue A…
S86
https://dig.watch/event/india-ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing. So I believe this …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 118 words per minute · 193 words · 97 seconds
Argument 1
Emphasis on the need for AI regulations and responsible AI
EXPLANATION
Speaker 1 highlights that the audience shares a common concern about the need for AI regulation and ethical responsibility. This sets the tone for the summit by underscoring the importance of governance frameworks for AI.
EVIDENCE
Speaker 1 notes that everyone is concerned about AI regulations and ethical and responsible AI, highlighting the collective focus on governance and responsibility [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IGF sessions highlight the necessity of AI regulation grounded in a human-rights framework and stress stronger oversight to prevent harm, providing direct support for the call for responsible AI [S7][S9].
MAJOR DISCUSSION POINT
Call for AI regulation and ethics
AGREED WITH
Anne Bouverot
DISAGREED WITH
Anne Bouverot
Anne Bouverot
12 arguments · 116 words per minute · 1148 words · 590 seconds
Argument 1
Hosting in the Global South sends a powerful symbolic and strategic message
EXPLANATION
Anne Bouverot argues that locating the AI Impact Summit in a Global South nation conveys both a symbolic affirmation of inclusivity and a strategic signal about the shared ownership of AI. She stresses that AI should not be the preserve of a few wealthy nations or corporations.
EVIDENCE
She states that holding an AI summit in a Global South country is important both symbolically and strategically, sending a powerful message that AI is not limited to a few nations but a global transformation that must involve all [15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote stresses that AI is a global transformation that must involve all nations and cites India as the ideal host, underscoring the symbolic and strategic value of a Global South venue [S2].
MAJOR DISCUSSION POINT
Symbolic and strategic importance of Global South venue
Argument 2
India’s market size and AI competitiveness make it an ideal host
EXPLANATION
Bouverot points to India’s large market, vibrant ecosystem, and strong technological expertise as reasons it is well‑positioned to lead AI development and adoption. She reinforces this claim with an external ranking that places India third globally in AI market competitiveness.
EVIDENCE
She highlights India’s vast market, rich ecosystem, strong technological expertise, and entrepreneurial dynamism, and cites the Stanford AI Index ranking India third globally in AI market competitiveness, underscoring its suitability as host [19-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bouverot points to India’s vast market, rich ecosystem and strong technological expertise as key reasons for hosting the summit, aligning with the external description of India’s AI competitiveness [S2].
MAJOR DISCUSSION POINT
India’s market and competitiveness as host criteria
Argument 3
Longstanding France‑India partnership shares a common vision for inclusive, sovereign AI
EXPLANATION
Bouverot emphasizes the deep, historic cooperation between France and India, noting that both countries share an understanding of AI’s stakes and have designated the year as a Franco‑Indian year. This partnership underpins a joint vision for AI that is inclusive and respects national sovereignty.
EVIDENCE
She references the long-standing France-India partnership, a shared understanding of AI stakes, and the designation of the year as a Franco-Indian year, indicating a common vision for inclusive and sovereign AI [24-27].
MAJOR DISCUSSION POINT
Franco‑Indian partnership and shared AI vision
Argument 4
Coalition of willing countries (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive and sustainable AI governance
EXPLANATION
Bouverot describes a multilateral coalition of nations that possess key AI talent and a shared commitment to inclusive, sustainable, and legitimate AI governance. She frames this coalition as a response to geopolitical competition and a driver of sovereignty aspirations.
EVIDENCE
She describes a coalition of willing nations-including France, India, Brazil, Japan, Germany, Canada-who share a vision of inclusive, sustainable, and legitimate AI governance, emphasizing aspirations for sovereignty and the geopolitical significance [31-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a ‘coalition of the willing’ comprising France, India, Brazil, Japan, Germany, Canada and others is described as a multilateral effort for inclusive and sustainable AI governance in the keynote [S2].
MAJOR DISCUSSION POINT
Multilateral coalition for inclusive AI governance
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1
Argument 5
AI cough‑analysis app enables early detection of tuberculosis, demonstrating tangible health benefits
EXPLANATION
Bouverot cites a concrete AI application observed at the All India Institute of Medical Sciences that can analyse a cough recorded on a smartphone to differentiate tuberculosis from other respiratory illnesses. This example illustrates how AI can deliver immediate, life‑saving public‑health outcomes.
EVIDENCE
She recounts visiting AIIMS where an AI application can analyse a cough recorded on a smartphone to early-detect tuberculosis versus a cold or other viral illness, illustrating a concrete public-health benefit [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on scalable AI partnerships documents a smartphone-based cough-analysis tool for early TB detection in India, illustrating concrete public-health impact [S13][S14].
MAJOR DISCUSSION POINT
Practical AI use in disease detection
Argument 6
Privacy‑preserving cross‑border health data sharing MOU facilitates joint research and new cures
EXPLANATION
Bouverot explains that a newly signed memorandum of understanding between India’s iSpirit and France’s Health Data Hub will enable the first privacy‑preserving transfer of health data across borders. This mechanism is intended to boost joint research efforts and accelerate the discovery of new medical treatments.
EVIDENCE
She mentions the ongoing collaboration between iSpirit in India, France’s Health Data Hub, and partners, and a recently signed MOU that will enable the first-of-its-kind privacy-preserving cross-border health data transfer, facilitating joint research and the discovery of new cures [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-border health data sharing mechanisms that preserve privacy are discussed in the context of data-flow harmonisation and the new MOU between iSpirit and France’s Health Data Hub [S15][S2].
MAJOR DISCUSSION POINT
Cross‑border health data sharing for research
Argument 7
RUSH program fosters scientific exchanges and joint research between French and Indian institutions
EXPLANATION
Bouverot highlights the RUSH programme, a series of scientific exchanges featuring talks by leading researchers and institutional heads, designed to deepen Franco‑Indian collaboration. The next edition is planned to take place in France, reinforcing ongoing academic partnership.
EVIDENCE
She notes a full programme of scientific exchanges called RUSH, featuring exceptional talks by researchers and institutional heads, with the next edition planned for France, fostering Franco-Indian scientific collaboration [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The RUSH scientific exchange programme is mentioned in the keynote as a platform for Franco-Indian research collaboration [S2].
MAJOR DISCUSSION POINT
Franco‑Indian scientific exchange programme
Argument 8
Leadership role at École Normale Supérieure supports deeper academic collaboration
EXPLANATION
Bouverot points out that she chairs the board of the École Normale Supérieure, one of France’s leading academic institutions, indicating her personal commitment to strengthening academic ties with India. This role underlines the importance of high‑level institutional leadership in fostering research cooperation.
EVIDENCE
She indicates that she chairs the board of the École Normale Supérieure, a leading French academic institution, reflecting her personal commitment to deepening academic cooperation [49-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bouverot’s position as chair of the board of École Normale Supérieure is noted in the keynote, illustrating high-level academic leadership fostering cooperation [S2].
MAJOR DISCUSSION POINT
Academic leadership facilitating collaboration
Argument 9
Launch of an open‑hardware tool to promote linguistic diversity and AI‑powered translation
EXPLANATION
Bouverot announces the release of an open‑hardware solution, developed with Bashini and Current AI, aimed at supporting linguistic diversity through AI‑driven translation across India’s 22 official languages and many more. She positions India as an ideal launch environment for this cultural‑inclusion initiative.
EVIDENCE
She announces the launch of an open-hardware tool, in partnership with Bashini and Current AI, aimed at promoting linguistic diversity and AI-powered translation across India’s 22 official languages and many more, positioning India as an ideal launch site [59-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
An open-hardware AI inference device aimed at supporting India’s 22 official languages is described in the inclusive AI briefing on linguistic diversity and in the multilingual internet discussion [S3][S18].
MAJOR DISCUSSION POINT
Open hardware for multilingual AI translation
Argument 10
Initiative backed by foundations and the UN to provide open datasets and tools beyond private VC funding
EXPLANATION
Bouverot references a foundation launched in Paris, supported by the MacArthur Foundation, the United Nations, France, India and other partners, which seeks to sustain AI development for the common good by offering open datasets and open‑source tools that are not reliant on venture‑capital funding.
EVIDENCE
She references John Palfrey of the MacArthur Foundation and the United Nations, noting a foundation launched in Paris to sustain AI development for the common good by providing open datasets and open-source tools not funded by venture capitalists [56-58].
MAJOR DISCUSSION POINT
Open‑source AI resources for the common good
Argument 11
Coalition for sustainable AI and Resiliency Working Group address AI’s energy consumption and climate impact
EXPLANATION
Bouverot recalls the coalition for sustainable AI created in Paris, co‑chaired by France and India, and the Resiliency Working Group that tackles AI’s high energy demands and associated climate risks. She stresses that sustainability must be integrated into AI design from the outset.
EVIDENCE
She recalls the coalition for sustainable AI launched in Paris, co-chaired by France and India, and the Resiliency Working Group addressing AI’s high energy demands and climate risks, emphasizing that sustainability must be built into AI design from the start [64-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Resilient and Responsible AI town hall references a coalition for sustainable AI and a Resiliency Working Group that tackles AI’s high energy demands and climate risks [S8].
MAJOR DISCUSSION POINT
Sustainable AI and climate considerations
AGREED WITH
Speaker 1
Argument 12
Prioritising child safety through age‑verification mechanisms and anti‑cyberbullying measures
EXPLANATION
Bouverot stresses that protecting children is a priority for French leadership and global citizens, calling for stronger age‑verification systems and actions against cyberbullying. She argues that innovation and protection must progress together.
EVIDENCE
She emphasizes safety for children as a priority for President Macron and citizens worldwide, calling for stronger age-verification mechanisms and actions against cyberbullying, asserting that innovation and protection must go hand in hand [70-77].
MAJOR DISCUSSION POINT
Child safety and online protection
AGREED WITH
Speaker 1
Agreements
Agreement Points
Both speakers stress the need for AI regulation, responsible governance and protective measures to ensure AI serves the public good and does not harm vulnerable groups.
Speakers: Speaker 1, Anne Bouverot
Emphasis on the need for AI regulations and responsible AI
Coalition of willing countries (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive and sustainable AI governance
Prioritising child safety through age‑verification mechanisms and anti‑cyberbullying measures
Coalition for sustainable AI and Resiliency Working Group address AI’s energy consumption and climate impact
Speaker 1 notes that the audience is concerned about AI regulations and ethical, responsible AI [7]. Anne Bouverot reinforces this by describing a multilateral coalition that aims for inclusive, sustainable AI governance [31-34], by launching a sustainable-AI coalition and Resiliency Working Group to embed climate considerations into AI design [64-68], and by calling for stronger child-safety safeguards such as age-verification and anti-cyberbullying measures [70-77]. Together they converge on the principle that AI must be governed responsibly and protected against misuse.
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the human-rights-based AI regulatory agenda highlighted at IGF 2023, where speakers called for rules that prevent harm and safeguard rights [S40], and it echoes broader concerns about controlling AI risks raised in parallel discussions [S41]. Multi-stakeholder governance models referenced in later sessions also reinforce this framing [S44].
Similar Viewpoints
Both see AI governance as a priority, linking regulation, multilateral cooperation, sustainability and protection of children as essential components of responsible AI [7][31-34][64-68][70-77].
Speakers: Speaker 1, Anne Bouverot
Emphasis on the need for AI regulations and responsible AI
Coalition of willing countries (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive and sustainable AI governance
Prioritising child safety through age‑verification mechanisms and anti‑cyberbullying measures
Coalition for sustainable AI and Resiliency Working Group address AI’s energy consumption and climate impact
Unexpected Consensus
Alignment on child‑safety measures
Speakers: Speaker 1, Anne Bouverot
Emphasis on the need for AI regulations and responsible AI
Prioritising child safety through age‑verification mechanisms and anti‑cyberbullying measures
While Speaker 1’s remarks focus broadly on AI regulation, the specific emphasis on protecting children, highlighted by Anne Bouverot, was not explicitly mentioned in the introduction, making the shared concern for child safety an unexpected depth of consensus [7][70-77].
Overall Assessment

The two speakers demonstrate a clear convergence on the necessity of responsible AI governance, encompassing regulatory frameworks, multilateral cooperation, sustainability, and child protection. This alignment signals a strong, shared commitment to shaping AI in an inclusive, ethical, and environmentally conscious manner.

High consensus on core governance principles, which bodes well for coordinated policy actions and joint initiatives across nations and sectors.

Differences
Different Viewpoints
Approach to achieving responsible AI – regulatory mechanisms versus multilateral coalition and voluntary standards
Speakers: Speaker 1, Anne Bouverot
Emphasis on the need for AI regulations and responsible AI
Coalition of willing countries (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive and sustainable AI governance
Speaker 1 frames the summit around the need for formal AI regulations and ethical oversight, stating that the audience is concerned about AI regulations and responsible AI [7]. Anne Bouverot, by contrast, foregrounds a multilateral “coalition of the willing” that will shape AI through inclusive, sovereign, and sustainable collaboration rather than through top-down regulation, emphasizing voluntary cooperation and joint initiatives such as the sustainable AI coalition and the Resiliency Working Group [31-34][64-68].
POLICY CONTEXT (KNOWLEDGE BASE)
The split reflects the debate captured in the Smart Regulation discussion, which proposes coalition-building around priority issues as an alternative to top-down regulation [S45], and in India’s AI Leap Policy critique of reliance on voluntary ethics guidelines, urging concrete regulatory mechanisms instead [S47].
Unexpected Differences
None identified
Speakers:
The transcript contains only an introductory remark from Speaker 1 and a keynote by Anne Bouverot. No direct contradictions or surprising points of contention emerge beyond the differing emphasis on regulatory versus collaborative approaches, which was anticipated given their respective roles.
Overall Assessment

The discussion shows limited overt disagreement. The primary divergence concerns the preferred mechanism for ensuring responsible AI – formal regulation (Speaker 1) versus a voluntary, multilateral coalition and sustainability‑focused initiatives (Anne Bouverot). Both speakers converge on the overarching goal of safe, inclusive, and beneficial AI, indicating a largely complementary dialogue.

Low to moderate disagreement; the difference is mainly strategic rather than substantive, suggesting that policy discussions can move forward with both regulatory and collaborative tracks without major impasse.

Partial Agreements
Both speakers agree that AI must be governed responsibly and serve the public interest. Speaker 1 highlights a collective concern for regulation and ethical AI [7], while Bouverot stresses that AI should be inclusive, sustainable, and oriented toward the common good, calling for safety measures (e.g., child protection) and inclusive governance [15-18][85-86]. The agreement lies in the shared goal of responsible AI, even though their preferred pathways differ.
Speakers: Speaker 1, Anne Bouverot
Emphasis on the need for AI regulations and responsible AI
AI is not only a technological transformation; it is a societal, cultural and political transformation – must be inclusive, sustainable, sovereign and rooted in the common good
Takeaways
Key takeaways
AI regulation and responsible AI are critical concerns for global stakeholders.
Hosting the AI Impact Summit in India underscores the strategic importance of the Global South in shaping AI’s future.
France and India share a deep, long‑standing partnership and are co‑leading multilateral coalitions (including Brazil, Japan, Germany, Canada, etc.) to promote inclusive, sovereign, and sustainable AI governance.
Concrete AI applications are already delivering public‑health benefits, exemplified by a cough‑analysis tool for early tuberculosis detection.
A new privacy‑preserving cross‑border health data‑sharing MOU between India’s iSpirit and France’s Health Data Hub will enable joint research and accelerate medical breakthroughs.
Academic collaboration is being intensified through the RUSH program and leadership ties such as the chairmanship of École Normale Supérieure.
Initiatives for the common good include an open‑hardware, AI‑powered translation platform to support linguistic diversity and a foundation that funds open datasets and tools beyond private VC capital.
Sustainable AI is being addressed via a coalition and a Resiliency Working Group, with a “Resilient AI Challenge” launched to reduce AI’s energy footprint.
Child safety is a priority, with calls for stronger age‑verification mechanisms and anti‑cyberbullying safeguards.
Resolutions and action items
Launch of an open‑hardware tool for linguistic diversity and AI‑powered translation (partnership between Bashini and Current AI).
Signing of an MOU for privacy‑preserving cross‑border health data transfer between iSpirit (India) and the Health Data Hub (France).
Co‑chairing of the Resiliency Working Group (France‑India) and initiation of the Resilient AI Challenge to develop low‑energy AI solutions.
Continuation and expansion of the RUSH scientific exchange program, with the next edition scheduled in France.
Commitment by France to work with India and other willing partners to build an inclusive, sustainable, sovereign AI ecosystem.
Emphasis on implementing age‑verification and anti‑cyberbullying measures to protect children in AI applications.
Unresolved issues
Specific regulatory frameworks and enforcement mechanisms for responsible AI remain to be defined.
How to scale and operationalise the privacy‑preserving health data‑sharing model globally is not yet detailed.
Funding models and long‑term sustainability for open‑source datasets and tools beyond initial foundation support are unclear.
Technical standards and practical pathways for universal age‑verification and cyber‑bullying mitigation have not been finalized.
Broader geopolitical tensions (e.g., US “Stargate” initiative, China’s DeepSeek) and their impact on multilateral AI cooperation were noted but not resolved.
Suggested compromises
Balancing rapid AI innovation with robust safety and ethical safeguards (e.g., integrating child‑protection measures alongside development).
Integrating sustainability considerations at the design stage of AI systems rather than as an afterthought, to reconcile performance goals with climate objectives.
Promoting open‑source, publicly funded AI tools to complement private‑sector VC‑driven development, ensuring broader access while maintaining commercial incentives.
Thought Provoking Comments
Holding an AI Summit in a country from the global south is very important from a symbolic perspective, but it is even more important from a strategic perspective. It sends a very powerful message to the world: AI is not a privilege of a few nations, not the preserve of a few companies. It is a global transformation and it must be shaped by all.
This statement reframes the AI debate from a technology‑centric narrative to one of global equity and strategic inclusion, challenging the common perception that AI leadership is limited to Western or corporate actors.
It set the tone for the entire keynote, shifting the conversation from abstract policy talk to a concrete call for inclusive governance. It prompted the audience to view the summit itself as a geopolitical statement, laying groundwork for later points on partnership and sovereignty.
Speaker: Anne Bouverot
AI is at the center of a fierce geopolitical and economic competition – think of the US ‘Stargate’ investment and China’s DeepSeek – but this competition has also created momentum for stronger collaboration between countries such as France, India, Brazil, Japan, Germany, Canada, and many others.
By juxtaposing rivalry with collaboration, the comment introduces a nuanced view of AI geopolitics, suggesting that competition can be a catalyst for multilateral cooperation rather than a zero‑sum game.
This pivot introduced the theme of ‘coalitions of the willing’, leading directly into the discussion of specific bilateral initiatives (public health, data sharing, sustainable AI). It broadened the audience’s perspective from national competition to collective action.
Speaker: Anne Bouverot
We are going from the AI Action Summit to the AI Impact Summit – impact in education, in public health, impact that improves lives, not just in theory, but in practice.
The shift from ‘action’ to ‘impact’ reframes the agenda from planning to measurable outcomes, emphasizing real‑world benefits and accountability.
This sentence acted as a turning point, moving the dialogue from high‑level policy to concrete use‑cases, which she then illustrated with the TB cough‑detection example.
Speaker: Anne Bouverot
An AI application at the All India Institute of Medical Sciences can, by simply coughing into a smartphone, analyze the sound and early‑detect tuberculosis versus a common cold or viral illness.
Provides a vivid, tangible illustration of AI’s potential in public health, grounding abstract policy discussions in a real, life‑saving technology.
The example energized the audience, demonstrating the practical relevance of the summit’s themes and prompting interest in scaling such solutions across borders.
Speaker: Anne Bouverot
The recent MOU between iSpirit (India) and the Health Data Hub (France) will enable, for the first time in the world, cross‑border health data transfer in a privacy‑preserving way, unlocking joint research and new cures.
Highlights an unprecedented technical and regulatory breakthrough in data sovereignty and privacy, addressing a core barrier to global AI collaboration.
Introduced a new topic—privacy‑preserving data sharing—that deepened the conversation about governance frameworks and set a precedent for future bilateral agreements.
Speaker: Anne Bouverot
We are launching an open‑hardware tool to promote linguistic diversity and AI‑powered translation, a partnership between Bashini and Current AI, leveraging India’s 22 official languages and many more.
Connects AI development with cultural inclusion, emphasizing that technology must reflect linguistic diversity—a rarely discussed but critical aspect of equitable AI.
Shifted the discussion toward cultural representation, prompting listeners to consider AI’s role in preserving and amplifying minority languages, and reinforcing the summit’s inclusive narrative.
Speaker: Anne Bouvreau
AI requires huge amounts of energy and risks undermining our climate goals. We launched a coalition for sustainable AI and today we co‑chair the Resiliency Working Group with India to embed sustainability by design, not as an afterthought.
Brings environmental sustainability into the AI conversation, challenging the assumption that AI development is inherently neutral regarding climate impact.
Introduced a new dimension—energy and climate—into the dialogue, encouraging participants to think about AI’s carbon footprint and spurring interest in the Resiliency Working Group’s upcoming challenge.
Speaker: Anne Bouvreau
Safety, especially for children, is a priority for President Macron and for citizens worldwide. We must strengthen age‑verification mechanisms and fight cyberbullying; innovation and protection can and must go hand in hand.
Links AI governance directly to child safety, a socially resonant issue that adds urgency and moral weight to regulatory discussions.
This comment broadened the scope of the summit to include societal protection, prompting attendees to consider regulatory safeguards alongside technical innovation.
Speaker: Anne Bouvreau
The real question is, will we shape AI? Or will we tell our children that we didn’t even try?
A rhetorical climax that reframes the entire discourse as a moral imperative, urging proactive stewardship rather than passive observation.
Served as a concluding turning point, leaving the audience with a call to action that encapsulated all prior themes—governance, collaboration, impact, sustainability, and safety—thereby reinforcing the summit’s purpose.
Speaker: Anne Bouvreau
Overall Assessment

Anne Bouvreau’s keynote threaded a series of pivotal comments that transformed the summit from a generic policy briefing into a multidimensional dialogue on inclusive, sustainable, and responsible AI. Each insight—whether highlighting geopolitical dynamics, showcasing concrete health applications, unveiling groundbreaking data‑sharing agreements, or stressing cultural and environmental considerations—acted as a catalyst that redirected attention, deepened analysis, and broadened the agenda. Collectively, these remarks established a narrative of collaborative stewardship, setting the stage for subsequent sessions to build on concrete partnerships, ethical safeguards, and measurable impact.

Follow-up Questions
Will we shape AI or will we tell our children that we didn’t even try?
A rhetorical question highlighting the need to proactively govern AI development rather than passively accept its impacts.
Speaker: Anne Bouvreau
How can age verification mechanisms be strengthened to protect children from AI-enabled cyberbullying?
Ensuring child safety is a priority; concrete solutions are needed to balance innovation with protection.
Speaker: Anne Bouvreau
What privacy‑preserving technologies and frameworks are required to enable cross‑border health data sharing for joint research and disease cure discovery?
The MOU between iSPIRT and the Health Data Hub opens a novel pathway, but practical implementation details remain to be explored.
Speaker: Anne Bouvreau
How effective is smartphone‑based cough audio analysis for early detection of tuberculosis compared with traditional diagnostic methods?
The AI application demonstrated at AIIMS shows promise, but systematic validation and scalability studies are needed.
Speaker: Anne Bouvreau
What design and deployment strategies will make open‑hardware tools for linguistic diversity and AI‑powered translation successful across India’s 22 official and many regional languages?
Launching the tool with Bhashini and Current AI raises questions about data collection, model training, and community adoption.
Speaker: Anne Bouvreau
Which approaches can reduce AI’s energy consumption and achieve sustainable, resilient AI systems as outlined by the coalition for Sustainable AI?
AI’s high energy demand threatens climate goals; research is required to embed sustainability from the design stage.
Speaker: Anne Bouvreau
What concrete outcomes can the Resiliency Working Group deliver through the Resilient AI Challenge to address sustainability and energy efficiency?
The challenge aims to find solutions, but specific metrics, benchmarks, and implementation pathways need further investigation.
Speaker: Anne Bouvreau

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI-Powered Chips and Skills Shaping India's Next-Gen Workforce

AI-Powered Chips and Skills Shaping India's Next-Gen Workforce

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel convened to examine how India can build a scalable, holistic workforce to support its growing semiconductor ambitions, with Rangesh Raghavan introducing the session and the key speakers from MeitY, LAM Research and the DGA group [1-4][16]. Secretary S. Krishnan highlighted that the India AI Mission and the India Semiconductor Mission are converging, making semiconductors central to the AI story and underscoring the need for a resilient, globally-trusted supply chain [28-32]. He noted that India plans to commission ten major semiconductor plants, with four starting production in 2026, and that the newly announced Semiconductor Mission 2.0 will extend to equipment manufacturing and the full ecosystem [35-38]. Krishnan stressed that while India already contributes 20% of global semiconductor design talent and has a large AI talent pool, it lacks skilled workers for advanced manufacturing and precision equipment, a gap LAM aims to fill [43-50].


Raghavan then turned to David Freed of LAM, describing the company’s 25-year presence in India, its state-of-the-art systems engineering lab in Bengaluru and its commitment to integrating India’s supply chain and workforce development [19-22]. Freed explained that the industry faces a “million-person” talent gap that spans field-service, process, equipment, metrology, device and reliability engineers, and that addressing it requires a broad understanding of the semiconductor ecosystem rather than narrow skill training [172-179][184-186]. He proposed faculty fellowships that place university professors in industry for six to nine months to transfer practical knowledge, and highlighted ongoing collaborations with ministries and agencies to expand training programmes [208-214].


Professor Saurabh Chandorkar added that India’s academic fabs, such as the IISc centre, are among the world’s best but cannot alone train a million people, so curricula are being revised to include fab-focused courses and hands-on modules on tools and process control [140-152][155-158]. He described the INUP programme that brings students from across India to work in fab environments and called for more such initiatives to scale up practical training nationwide [158-161]. Ashwini Vaishnaw praised LAM’s efforts, cited the growth from 50 to 315 participating universities and the deployment of students using advanced design tools to create and validate chips, emphasizing the strategic importance of semiconductors for AI and export potential [103-110][112-119].


Throughout the discussion, participants agreed that coordinated government, industry and academic action is essential, with the ISM 2.0 framework expected to fund skilling, supply-chain integration and equipment manufacturing [129-130][162-164]. The session concluded that building a broad, industry-aligned talent pipeline is critical for India to become a key player in the global semiconductor value chain and to realise the economic benefits of its semiconductor missions [61][172-176][184-186].


Keypoints

A massive workforce and skill gap must be closed to sustain India’s semiconductor ambitions.


The opening remarks stress the need for “required workers to enable the growth of the semiconductor industry” [1]. Later, Secretary Krishnan notes that India “lacks … people in advanced manufacturing” and “precision manufacturing of semiconductor equipment” [45-48]. He describes training programmes run in India and abroad (Malaysia, Singapore, Taiwan, Europe) [53-60]. David Freed quantifies the challenge as a “million-person gap” that spans many roles, from field-service engineers to process and device specialists [172-184]. Professor Chandorkar adds that universities must redesign curricula and provide “hands-on training” to prepare students for fab work [145-152].


The discussion repeatedly links the semiconductor drive to the AI mission, framing them as mutually reinforcing.


Krishnan explains that the session “represents how semiconductors are so central to the AI story as AI is increasingly … the semiconductor story” [28-30]. Later, Raghavan remarks that the event “speaks to the importance of the semiconductor industry to enable this transition and the role that companies like LAM play” [84]. Vaishnaw reinforces this convergence, stating that “in this world of AI … semiconductors will be one of the most important layers” [109-110].


Government initiatives, especially the India Semiconductor Mission 2.0 and new fab commitments, are positioned as the backbone of the ecosystem.


Krishnan announces “India Semiconductor Mission 2.0” covering the whole ecosystem, including equipment manufacturing [37-38], and notes the plan to commission ten major semiconductor plants, with four starting production in 2026 [35-36]. He projects a domestic market of “about $100 billion by the end of this decade” [40-42]. Vaishnaw cites concrete targets: “60,000 talent for clean-room operations and 80,000 overall design engineers” and the expansion from 50 to 315 universities [103-106]. Triolo references ISM 2.0’s focus on “skilling and on supply chains and manufacturing” [129-130].


Collaboration among industry, academia, and government is presented as essential, with concrete programmes such as faculty fellowships, hands-on labs, and joint curriculum development.


Triolo frames the panel as a “three-way relationship” linking government support, academic capacity, and industry needs [162-166]. Chandorkar describes existing industry-academia projects (INUP, hands-on courses on pressure gauges and P&ID systems) and calls for scaling them nationwide [195-200]. Freed proposes “faculty fellowships” that place university faculty inside companies for 6-9 months to transfer industry-relevant knowledge [208-214]. The overall message is that only a coordinated effort can bridge the talent gap [186-190].


Overall purpose/goal of the discussion


The session was convened to map out how India can build a scalable, holistic workforce and ecosystem for its semiconductor sector, aligning the national AI and semiconductor missions, detailing government policy (ISM 2.0, fab roll-outs), and forging concrete industry-academia-government partnerships to close the talent gap and secure a resilient, globally competitive supply chain.


Tone of the discussion


The conversation begins with a formal, celebratory tone: welcome remarks, praise for the exhibition, and enthusiastic acknowledgment of government achievements [4-10][84-85]. As the panel progresses, the tone shifts to a more technical and problem-solving focus, highlighting skill shortages, training needs, and specific policy actions [45-48][172-184][145-152]. Throughout, the participants remain optimistic and supportive, using occasional light-hearted remarks (e.g., the “picture” jokes [23-24]) but consistently emphasizing collaboration and urgency. By the closing minutes, the tone becomes reflective yet still forward-looking, summarizing commitments and thanking contributors [335-341].


Speakers

S. Krishnan – Secretary, Ministry of Electronics and Information Technology (MeitY) [​S1]


Areas of expertise: Government policy for semiconductor and AI missions, supply‑chain resilience, semiconductor ecosystem development.


Harish Kumar – Representative, CSTV – Access to Energy Systems [​S4]


Areas of expertise: Energy systems, solar technology, workforce skilling in the semiconductor and renewable‑energy sectors.


Ashwini Vaishnaw – Honorable Minister for Electronics and Information Technology [​S6][​S7]


Areas of expertise: National semiconductor policy, industry‑government‑academia collaboration, AI‑driven technology initiatives.


Rangesh Raghavan – Host/Moderator, LAM Research (senior executive)


Areas of expertise: Semiconductor manufacturing, deposition & etching technologies, workforce development, event facilitation.


Professor Saurabh Chandorkar – Professor, Indian Institute of Science (IISc) – Key partner in the Semiverse program [​S11]


Areas of expertise: Semiconductor fab operations, academic research, talent development, hands‑on training for semiconductor manufacturing.


David Freed – Corporate Vice President, LAM Research (advanced analytical & simulation software) – Leader, Semiverse Solutions [​S12]


Areas of expertise: Semiconductor modeling, AI‑enabled talent pipelines, workforce training, industry‑academia partnership strategies.


Participant – Unnamed audience member (asks questions)


Areas of expertise: (not specified)


Paul Triolo – Partner, Technology Practice Lead, DGA Group – Panel moderator [​S17]


Areas of expertise: Technology consulting, semiconductor ecosystem integration, panel facilitation.


Additional speakers:


(No speakers outside the provided list were identified as having spoken in the discussion.)


Full session reportComprehensive analysis and detailed insights

The session opened with Rangesh Raghavan emphasizing that India’s semiconductor ambition “requires workers to enable the growth of the semiconductor industry and support this era” and stating that the day’s purpose was to devise a “scalable, holistic workforce strategy” for the sector [1-4]. He presented a historic Bidriware plate as a symbolic gift to the Minister [334-335], welcomed the guests – Sri Krishnan ji, Secretary of MeitY, David Freed of LAM Research and Paul Triolo of the DGA group – and praised the exhibition, noting its extension for an additional day [6-12][16-22]. Raghavan framed 2025 as a turning point, with government policy finally translating ambition into reality and the focus expanding beyond wafer fabrication to the whole ecosystem [16-18].


Paul Triolo acted as moderator and, before the discussion began, noted that Micron’s Anand Ramamurthy could not join because of a personal emergency [338-339]; he also mentioned that he had hoped to “grill Secretary Krishnan on ISM 2.0” but the opportunity did not arise [336-337].


Secretary S. Krishnan highlighted the convergence of the India AI Mission and the India Semiconductor Mission, observing that “semiconductors are so central to the AI story as AI is increasingly … the semiconductor story” [28-30]. He announced India’s participation in the Pax Silica consortium to build a “trusted supply chain” and warned that over-reliance on any single geography had been exposed by the COVID-19 pandemic [31-33]. Krishnan then outlined the government’s rollout: ten major semiconductor plants are committed, with at least four slated to begin production in 2026 [35-36]; the newly launched Semiconductor Mission 2.0 will cover the entire ecosystem, including domestic equipment manufacturing [37-38]. He projected a domestic market of roughly $100 billion by 2030, stressing the need for capacity to serve both internal demand and exports [40-42]. Citing India’s contribution of about 20% of global semiconductor design talent and its large manufacturing and AI talent pools [43-44], he warned of a critical shortage in “advanced manufacturing” and “precision manufacturing of semiconductor equipment” [45-48]. Existing training programmes in fabs, OSATs and labs across India, Malaysia, Singapore, Taiwan and Europe were noted, with a call for expanded capacity [53-60][61-62].


Ashwini Vaishnaw provided a data-driven update on the education front. In 2022 the semiconductor mission set targets of 60,000 clean-room operators and 80,000 design engineers; today 315 universities are participating, with students across Assam, J&K, Kerala and Tamil Nadu using world-class design tools, fabricating chips at the SCL Mohali lab and validating them [103-108]. Vaishnaw positioned semiconductors as a foundational layer in the five-layer AI architecture [109-110][112-119][84-85] and praised LAM Research’s role in linking India’s supply chain to the global ecosystem [84-85]. He announced that a new semiconductor plant will be launched in Uttar Pradesh the next day by Prime Minister Narendra Modi [123-124].


David Freed’s opening remarks were brief, noting that “even design… objective… drive across the country for full scaling of our talent development” [71-73]. He quantified the workforce challenge as a “million-person gap” spanning field-service engineers, process engineers, equipment engineers, metrology engineers, device engineers and reliability engineers [172-179]. Freed argued that the gap is not a single skill set but a need for a broad industry understanding; over-specialisation such as focusing solely on coding would be counter-productive [184-186][287-292]. To bridge the divide he proposed “faculty fellowships” that would place university professors inside industry for six to nine months, thereby transferring practical knowledge back to academia [208-214], and noted ongoing meetings with ministries to secure government support [207-214].


Professor Saurabh Chandorkar added an academic perspective. He described IISc’s fab as “among the top three or four in the world” but acknowledged that a single academic fab cannot train a million engineers [143-148]. Consequently, curricula are being revised to include fab-centric courses such as SPC (statistical process control) and hands-on modules on pressure gauges and P&ID systems [149-152][195-199]. Chandorkar highlighted the INUP programme, which brings students from across India to work in fab environments, and announced the establishment of a dedicated “training fab” that will be replicated nationwide [158-161][205-210]. He called for government backing under ISM 2.0 to fund these facilities and to support curriculum redesign [153-158][162-166].
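The SPC coursework mentioned above can be illustrated with the classic Shewhart control chart used on fab tools: establish control limits from a stable baseline, then flag excursions. Here is a minimal sketch with invented etch-depth numbers; it shows the general technique, not IISc's actual curriculum material.

```python
import numpy as np

def control_limits(samples):
    """Shewhart-style control limits: mean +/- 3 standard deviations."""
    mu, sigma = samples.mean(), samples.std(ddof=1)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(samples, lcl, ucl):
    """Return (index, value) pairs for measurements outside the control band."""
    return [(i, x) for i, x in enumerate(samples) if x < lcl or x > ucl]

rng = np.random.default_rng(7)
# Simulated etch-depth readings (nm) from a stable process...
baseline = rng.normal(500.0, 2.0, size=200)
lcl, ucl = control_limits(baseline)
# ...then a new lot ending in a drifted reading that SPC should flag.
drifted = np.append(rng.normal(500.0, 2.0, size=20), 515.0)
alarms = out_of_control(drifted, lcl, ucl)  # the 515 nm reading should be flagged
```

The same idea underlies the hands-on process-control modules described: students learn why a tool that drifts a few sigma must be stopped before it scraps an entire wafer lot.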


Triolo reiterated the “three-way relationship” linking government, academia and industry [162-166] and emphasized that ISM 2.0 will focus on “skilling and on supply chains and manufacturing”. He asked Chandorkar how the landscape might look in 2026 and what support the government might need [129-133][127-130], and summarised the consensus that coordinated action is essential for building a resilient, diversified supply chain [162-166][30-33].


The discussion revealed a nuanced disagreement about workforce development. Krishnan stressed the urgency of specialised training for advanced manufacturing and precision equipment [45-48]; Freed warned against narrow, skill-centric approaches and advocated a broad, interdisciplinary talent pipeline [184-186][287-292]; Chandorkar proposed a hybrid model that combines specialised hands-on labs with a curriculum overhaul to give students both practical exposure and a wider ecosystem understanding [140-152][153-158].


Audience questions broadened the scope. Harish Kumar asked about indigenous solar-wafer capability, prompting Chandorkar to acknowledge ongoing polycrystalline silicon growth efforts, though details remained undisclosed [258-272]. Several participants sought advice for young aspirants; Freed urged a focus on problem-solving, critical thinking and a solid grounding in physics, chemistry and materials science rather than a single skill such as coding [287-292]. A question on optimisation led Freed to differentiate between small-data R&D environments, where optimisation is limited, and data-rich manufacturing settings, where AI-driven optimisation is highly effective [306-311][322-331]. When Paul asked “What is IAS?” the acronym was not clarified in the transcript.


In closing, the panel reaffirmed strong consensus that India must close a massive semiconductor talent gap, that AI and semiconductors are mutually reinforcing, and that a resilient, globally-trusted supply chain depends on coordinated government, industry and academic action. The newly announced ISM 2.0, the expansion to ten new fabs, the growth to 315 universities, and LAM Research’s Bengaluru lab together form the backbone of a holistic workforce strategy. Unresolved issues include the detailed roadmap for faculty fellowships, the scaling plan for training fabs, precise quantitative targets for the million-person gap, and the development of domestic precision-equipment manufacturing. Overall, the participants agreed that coordinated policy, industry investment and academic reform are essential to close the talent gap and position India as a pivotal node in the global semiconductor value chain [61-62][172-179][208-214][315-321].


Session transcriptComplete transcript of the session
Rangesh Raghavan

required workers to enable the growth of the semiconductor industry and support this era. We’re here today to just talk about that. Thank you for the opportunity to engage in this important conversation. We have experts here who can talk about how we build scalable, holistic workforce strategies to develop India’s semiconductor ambitions. We extend a warm welcome to our guests today. I’ll start with Sri Krishnanji, Secretary of MeitY. Thank you, sir, for joining us today. We know you’re very busy, but if I may add, excellent job by the MeitY team and all of, you know, we’re very proud to be here at this event. It was a mind-blowing exhibition. For those of you who have not enjoyed the exhibition, I urge you.

It has apparently been extended by a day. So I urge you to visit tomorrow. Tomorrow, if you get the chance to do so. You can visit till 8 p.m. today. You can visit till 8 p.m. today, sir. Sir, thank you, thank you sir well we have also here with us David Freed, Corporate Vice President and leader of LAM Research’s advanced analytical and simulation software business that supports the development of the semiconductor industry we also have Mr. Paul Triolo. Mr. Paul Triolo is a partner in technology practice lead at the DGA group who graciously agreed to be a moderator for our panel discussion which is to follow shortly to set some context to both these sessions 2025 was a great year it was a great year for the India semiconductor industry as well with the right focus of the government and thanks to the India semiconductor mission years of policy vision are finally translating ambition into reality and we are beginning to see the fruits of that now and rightfully so the government has expanded their focus beyond just wafer fabrication to the larger ecosystem and to the larger because we realize that it takes the whole village to make this happen.

How do we ensure that we have the right talent, the research infrastructure, the technology expertise, the supply chain, all of the other things that it takes to support this sector? With the industry accelerating past a trillion dollars, we at LAM recognize the importance of supporting a globally distributed innovation-led ecosystem. We’ve been in India for 25 years, and we are committed to being a long-term partner and contributor to this. We have a state-of-the-art systems engineering lab for semiconductors in Bengaluru, which continues to grow and is significantly expanding India’s contribution to the global industry. We are also making rapid progress in integrating India’s supply chain into our global supply chain. But most importantly, we have taken big strides in supporting the development of the workforce in India, and David will talk about that a little bit more shortly.

so it won’t take any much more time but I’ll invite Secretary Krishnan to share a few of his remarks. Thank you. Do you want a picture? He wants a picture now.

S. Krishnan

Part of the planning for many of these sessions included instructions that the picture of the panellist needs to be taken right in the beginning so that if somebody goes missing midway through they’re not missed. So I guess he was getting to do his job. LAM Research in some ways is a bit of a lucky charm as far as I’m concerned and I think Rangesh will understand what I’m trying to say but more importantly I think this is, I’m really happy to be part of this session because this is one of those sessions which represents what the convergence is in what India is attempting. We have two major missions, we have the India AI mission and we have the India semiconductor mission and this session kind of represents how those two missions are converging or getting together.

It represents how semiconductors are so central to the AI story as AI is increasingly to the semiconductor story. So this morning we also signed the Pax Silica, we were added to the Pax Silica so which again represents a very important step forward in building a trusted supply chain in the semiconductor space. What the world needs is a resilient and reliable supply chain where, I mean, it is not just for geopolitical reasons, but even for other reasons. We saw in the COVID pandemic issues relating to the supply chain prop up and therefore over-reliance on any one geography is always going to be a problem and India needs to be part of this game. And for India to be a reliable long-term partner in this game, it is also very important that we are not just part of the design teams, which we already are, including for LAM Research and including for many other leading semiconductor companies in the world, but we also need to be part of the manufacturing.

And manufacturing not just of the chips. And this year we are going to have 10 of the, we already have committed to 10 major semiconductor plants across the country, four of them at least. We will commence production during the current year, during 2026, and the remaining in due course in about a year or so. But more importantly, I think the India Semiconductor Mission 2.0 has also been announced, which will cover the entire ecosystem, including the manufacture of semiconductor equipment in the country. And I think that is a very, very critical and important step. And this is important from a context where I think the use of semiconductors is only going to grow and not come down. India’s own market for semiconductors is going to be about $100 billion by the end of this decade, and a fairly substantial part of what the global market is.

And we need to build capacity to actually cater to a significant part of this market, and in some senses also for export. And the export part is important, not from the perspective, not just from the perspective of… being competitive and being efficient because if you’re not able to export then it obviously means you’re not competitive and efficient globally but also because when you are part of a global supply chain you are never going to manufacture everything in the chain but you need to have a significantly important and you need to be an indispensable part of it somewhere so that you don’t sort of get knocked out of it that somebody else’s way so it is it’s it’s the way that this entire system works it’s the way the global value chain works and that’s where we are coming together in this entire space and what LAM is doing in the space is extremely important and equally what’s very important if we are to do this kind of advanced manufacturing in the country is actually the capacity building to have the skills to do this we keep talking about STEM skills in this country we keep talking about the number of people who are we we have 20% of the semiconductor design team in the country, in the world.

We also are recognized as having one of the largest talent pools for manufacturing, for AI in the world. Both of these are true. But where we lack is people in advanced manufacturing. In the actual manufacture of semiconductors. Where we lack is in the precision manufacturing of the equipment needed for semiconductors. And LAM Research and companies of that nature, in building the semiconductor ecosystem in this country, are looking to develop precisely that. The precision manufacture of semiconductor equipment. That means we will have to skill people in that space. We will have to skill people in that line of work. And that’s the real challenge that we will be facing in the next five years. As part of the India Semiconductor Mission, we have trained workers.

In fabs and in… in OSATs, not just in India, but like in the semiconductor lab at Mohali, but also in Malaysia. We have trained people in Singapore. We have trained people in Taiwan. We have trained people in Europe. We have trained people in different parts of the world. And we will continue to do that, but we will also need more capacity to do it here. And training and research capacity being built by companies like LAM will have an important implication there, and the government will support those initiatives as part of the India Semiconductor Mission 2.0, and make sure that India becomes a key player in this space as well and becomes a key partner in global supply chains.

It’s an investment that the world is making in India, which I can assure you will be paid back in no uncertain terms in terms of building a resilient, trusted value chain for semiconductors for the world, and that’s precisely what… We are attempting… to do through the series of initiatives and today we can’t any longer speak of AI without speaking of semiconductors or vice versa and which is why what LAM is doing and what we are attempting to do in terms of skill building in this critical space is so important and which is why I’m extremely happy to be part of this event and all strength to you in LAM may you continue to be a lucky charm thank you

Rangesh Raghavan

Thank you very much Krishnan sir. Uh, sir is in such a hurry, you’re in such a hurry, uh, we want to make sure you get your gifts. I just wanted to wind down. Five minutes. Okay. We are eagerly awaiting the arrival of Honorable Minister Vaishnawji. He is five minutes away, is what I’m just told. Minister Vaishnawji has been instrumental in getting this industry where it is in India over the past few years. We look forward to his presence here shortly. And in the interim, I’d just like to invite David Freed to give a few comments. David is a leader of our global semiconductor modeling and workforce development organization called Semiverse Solutions. David has played a key role.

in building India’s workforce training on advanced semiconductor manufacturing. He’ll give a few words about that. Thank you. Thank you very much.

David Freed

even design. And so the objective here is really to drive across the country for full scaling of our talent development. So with that I’ll wrap up. Thank you very much for your attention and I think we’ll kick off our panel pretty soon. I’m sorry.

Rangesh Raghavan

Thank you very much David. Welcome sir. It’s a pleasure to see you again. We know you’re very busy and this is one of the marquee events for the country of the whole year. The scale and the impression of this event is mind boggling truly at the scale that we have been able to do it. So congratulations to you sir and the team for inspiring us with the exhibits that we saw today were amazing. And it speaks to the potential of AI. It also speaks to the importance of the semiconductor industry to enable this transition and the role that companies like LAM play in that. and we are very grateful to you sir for your support.

You’ve always been very supportive of us in our journey here and you continue to be so we’d like to hear from you a few remarks. We know you’re a very busy person so we’d appreciate it. Thank you.

Ashwini Vaishnaw

Is this the LAM team, or people who have come to listen to LAM? How many people here work at LAM? Mostly LAM people. The LAM supplier ecosystem? Okay, very good. Solar technology? You’re in solar, very good. The way the semiconductor industry is growing in India is unprecedented, and it has happened in just a few years.

Initially, we were focused on design, and we built a lot of new capabilities in design. Then we came to manufacturing, and now we are going much deeper into equipment and materials. In 2022, when the Semiconductor Mission started, we had a target of 60,000 talent for clean room operations and 80,000 design engineers overall. We thought we would start in 50 universities; today, we have 315 universities. We already have students using the world’s latest design tools, designing chips, getting them manufactured at SCL Mohali, and validating them.

And throughout the country, from Assam, J&K, Kerala, Tamil Nadu, students from all over are doing chip design themselves. This capability is going to become a great strength in the coming years. And we all know that in this world of AI, in the age of intelligence, semiconductors will be one of the most important layers; in this five-layer architecture, the semiconductor layer is going to be very important. So, all of you, please participate in this. I would like to thank LAM for taking this initiative, and all the people who have become associated with it, especially the universities. How many people have come from the universities? How was your experience coming from the university?

Very good. How easy was it to use this entire Semiverse? Very easy. Actually, my good friends from LAM… It was easy. Did anyone find it difficult? This talent gap has to be filled by India alone. That means all that work is going to come to India, and that will be a huge opportunity and space for our young people. And tomorrow, in Uttar Pradesh, the foundation of a new semiconductor plant will be laid by our Prime Minister, Shri Narendra Modi. Many congratulations.

Rangesh Raghavan

Sir, as you know, we are in the business of deposition and etching. This is an old 14th-century Indian craft called Bidriware, from the district of Bidar in North Karnataka, where artisans also use a damascene process, which is what is used for the most advanced semiconductors today. This plate shows the skill of the artisans, who have manually etched these features, deposited metal within the etched features, and then polished it, which is exactly the process used today in semiconductor manufacturing. So we thought it would be very appropriate for you to have this gift. Thank you very much, sir. Thank you so much.

Thank you. Now we can proceed with the panel discussion in the remaining time. We have Paul Triolo here to conduct the panel. Mr. Anand Ramamurthy from Micron was due to join us; unfortunately, he had a personal emergency and had to leave town, so we wish him well. In the meanwhile, we will have David and Professor Saurabh Chandorkar. Professor Chandorkar is one of our key partners at the Indian Institute of Science. He has been instrumental in the launch and execution of the Semiverse program, and he is also very busy advancing the state of the nation in the most advanced research areas for semiconductors and their applications, so we’d love to hear from him as well. Thank you very much. Thank you, Paul.

Paul Triolo

Thank you. So, okay, I’m going to pick up on some of the themes that were discussed earlier. I was going to grill Secretary Krishnan on ISM 2.0, but unfortunately we can’t do that. Looking forward, as was mentioned, ISM 2.0 will focus on skilling, supply chains, and manufacturing. So let me start with Dr. Chandorkar. We know that IISc is hosting a really rich center in Bangalore, with LAM and other companies, that is critical for the skilling issue in the semiconductor industry going forward. How do you see the future shaping up in 2026? And what does IISc need, for example, from the government under ISM 2.0?

Professor Saurabh Chandorkar

Sure. Let me just start by saying that it’s actually quite amazing for me to see the dream of having fabs come up in India. It’s a dream that dates from my father’s time; he was also a professor, at IIT Bombay, working on semiconductor manufacturing and technology. Anyway, fast forward, and we are in this amazing position where we are actually getting fabs here, which, as has been discussed, leads us to realize that we need a lot of workforce. It’s not that we didn’t have people here learning semiconductor technology, or that we were not doing semiconductor design.

What was actually missing was the ability to see how a fab actually works, where you go and interact with tools. That’s where Semiverse comes in. We at IISc do in fact have a really good fab; as an academic fab, I would say we are probably in the top three or four in the world. So we are pretty good there, but that’s not the case for most universities here, and we alone cannot take on the role of training one million people. That’s just impossible. So when this whole program came along, it was an ideal opportunity, and of course that’s very exciting. We also recognize that this needs a certain re-look at the way we teach our coursework.

So, for example, we started teaching courses such as advanced nodes from the perspective of the fab, and that’s where we in fact teach with and make use of this software. I, for example, teach SPC, which is statistical process control: how does one do that? Those are the kinds of things that are really required for a fab. The way I see it, the foundation has been laid down, and I am sure that if this continues with the support of the government, we’ll do just fine. But the ask is not small. It’s not that once you get trained on tools like this you immediately become ready to go and start working in the fabs.
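The SPC taught in such courses centers on control charts. As a minimal sketch of the core idea, with hypothetical readings (an editor’s illustration, not from the talk): compute 3-sigma control limits from sample data and flag any measurement outside them.

```python
# Sketch of a basic SPC (statistical process control) check:
# derive 3-sigma control limits from process measurements and
# flag out-of-control points. Readings are hypothetical.

def control_limits(samples):
    """Return (lower, upper) 3-sigma control limits for the samples."""
    n = len(samples)
    mean = sum(samples) / n
    # Sample variance (n - 1 in the denominator), then standard deviation.
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    sigma = var ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

# Hypothetical film-thickness readings (arbitrary units).
readings = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1]
lcl, ucl = control_limits(readings)

# Any reading outside the limits would trigger investigation.
out_of_control = [x for x in readings if not (lcl <= x <= ucl)]
```

In a real fab, limits are set from a qualified baseline run and then applied to new lots; this sketch just shows the arithmetic behind the chart.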

That’s not the case. What needs to be understood, therefore, is that there is a second layer of hands-on training that needs to happen. We ourselves have a training fab that is currently being established, and this needs to happen across India far more. We already run programs such as INUP, where people come from all around India and do some fabrication in our fabs, but this would be more oriented towards training. So we are gearing ourselves up for that, and I think this needs to happen everywhere else where more fabs are coming up.

Paul Triolo

Great, great. So, as we’ve heard, this integration of government support for both the academic piece and the industry piece is really important, a really important three-way relationship. I’m going to go back to David, who gave a great presentation on Semiverse. LAM, as I think everybody understands, is such a critical part of the supply chain; no LAM, no semiconductors, right? So, David, how do you envision this workforce? As Professor Chandorkar has noted, the foundation has been laid, but as AI is taking off and as we look to the next three to four years, we’re going to see huge demand.

And the million-person shortage really blows my mind; that’s a huge number. So in terms of support from the government to help close that gap and continue the momentum that LAM has generated here, what are the gaps you see? And are there areas you’d like to see expanded in this collaboration between the government and academia?

David Freed

Okay, so I’ll start with the gaps, right? This million-person gap. I think it’s important to recognize that that gap is not a single type of person or a single type of skill. There are gaps across the entire ecosystem, and that ecosystem spans, even just from LAM’s perspective, from field service engineers who maintain the tools in the lab and in the fab all the way to process engineers, process developers, and equipment engineers. And if you expand out to the rest of the ecosystem, our customers will have demand for metrology engineers, device engineers, simulation, and reliability. So the span of disciplines that makes up that million-person gap is very, very broad.

Okay. So one of the things we tend to focus on is developing talent and a talent pipeline rather than just educating on individual skills, and I think that’s super important for the future of semiconductors in India. I actually want to touch on a word that you said, Professor; I think you said it five different times in your response: understand. The understanding of what we’re producing, the understanding of what our products are, is so much more important than a singular skill to go do one thing. So the Semiverse program at IISc, as we’ve expanded out across the country, is more about teaching students what we are making, what the devices are, what process integration means, what we are creating, so that those students can go off into various different areas of the ecosystem. Are they ready for all of those jobs with one class?

No, of course not. They need additional hands-on training and additional education in those areas. But my recommendation, broadly, is to focus on talent rather than skill, combining a broad understanding of the industry, what we’re trying to accomplish, and what we’re building. The countries that have historically led this industry have been working at it for 50 to 70 years; they developed that understanding and that broad swath of knowledge over 50 to 70 years. If we’re going to do it here in two years, it’s going to take a very different focus on how we develop an understanding of the industry. That’s my expectation, but by doing that we can address all of those gaps sort of at

Paul Triolo

the same time. Great, great. Yeah, I mean, I think “skilling” is the popular word here, but it may not be the right way to think about this industry, given what we discussed about the complexity of manufacturing and the disciplines that are needed. It really is a commitment to a huge program of talent development, and again, collaboration with IISc and the academic world is so important. So let’s turn back to Professor Chandorkar.

I know we’re going to have a little bit of time for questions at the end, I hope. I’ve asked what IISc is looking for from the government; what is IISc looking for from industry, particularly as we enter ISM 2.0, which I think is really important, even if we may not know all the details? And are there areas where things can be improved or streamlined? What are the challenges? Because, as we know, this is complex.

Professor Saurabh Chandorkar

Right. So from industry: some of the things we are already in the process of discussing with industry relate to what he just mentioned, that you don’t necessarily have to focus on one particular skill. But tailoring the coursework to what is actually essential for some of the skills that are needed is something that still has to happen. As an example, we recently started a course giving hands-on training to students and people working in labs on how pressure gauges work and how you build P&ID systems. Those are exactly the kinds of things he just talked about, such as needing to be able to maintain tools.

And that’s the kind of training we are in fact giving in our own courses as well. One of the rather interesting ways in which IISc currently provides a service to industry is simply by training our own 50-odd employees who work in our fabs; they are immensely in demand, and it’s very hard for us to keep them. So what we would like more of from industry is this kind of hand-holding. For example, we talked with LAM and did this together with them, and this needs to grow more widely. To some extent we can do it, but since LAM is already giving out this software to so many other places, maybe it would be easier to do the same elsewhere as well, and I’m sure that would be of great use.

David Freed

Just one comment I’ll make: this is one of the few situations where industry doesn’t need to be convinced to be involved. If we don’t fill that talent gap, we will fail. All of our business objectives and our growth objectives for the next 10 years require the talent pipeline to be developed. So this is not something where you’re trying to crack into industry or convince us to do something we don’t want to do. We fail if this doesn’t happen. It’s one of those examples where we have mutually, perfectly aligned objectives. I’ve had meetings for the last two days with different ministers and different agencies here in India where we’re trying to find ways we can be more involved.

One idea, and I hope I’m not ruining any surprise, that came up over the last couple of days is faculty fellowships at these companies. Right? If we could take the faculty and, once we figure out a way to get it funded, give them a job for six to nine months inside our companies, in the industry, and really drive more industry-relevant knowledge to the faculty and the universities, I think that would be a brilliant idea, and we’re going to try to pursue it. This kind of idea only comes when we sit down at the table and start talking: What do the universities need? What do we need? What can we provide?

How do we make this work? But nobody needs to convince us. We need this to happen.

Professor Saurabh Chandorkar

Right, right. Yeah, along the same lines: maybe more of the projects these students do for their PhDs could be aligned not just with LAM, but with the entire center.

David Freed

No, no, no, just LAM. Just LAM. Just LAM.

Professor Saurabh Chandorkar

Yeah, so I think that would really work out, and it’s kind of important. I truly believe that unless you do projects aligned with industry, it’s not quite the same. You did say that talent matters, but the fact that we have a small time window means we don’t have as much time. As an example, I did my own PhD in MEMS, and in industry, when I joined Intel, I started out with no knowledge of all the SPC stuff, no knowledge of how they do things on the floor and whatnot.

But I had to learn it, and I had enough time, so I had no problems. That is not the case here. For example, sure enough, once Tata starts their fab, they’re going to quickly find out how hard it really is, how quickly and how often you fail, and how important it is to pick yourselves up and move forward. That is something PhDs have built into them, because they fail, mostly just fail, and then eventually succeed at some point. So I think that’s another thing that probably needs to happen at a bigger scale.

I think it would be a big deal within India if more PhDs also started looking into these kinds of jobs, or at least having some bent towards them. So that would be a thing.

Paul Triolo

And I think it’s important, the fact that there are going to be fabs. Japan is going through a similar thing, right? For a long time they weren’t doing advanced logic, and that’s one of the reasons they attracted TSMC to come and build a fab. Now, within the academic sector, there’s a lot of interest in hardware engineering. It’s a hard discipline, but at the end of the day, if the country is building fabs and there’s a need for engineers, that makes it more attractive. That’s part of the whole ecosystem building.

David Freed

I was just going to say, I joke that I only want LAM to benefit from this, but we’re seeing other companies in the industry follow us. Obviously, LAM is leading this effort and is already benefiting from it; we’re already seeing the talent pipeline develop, and we’re scaling the team in Bangalore. Because of that, our competitors, but also our partner companies, have started doing the same. I can mention ASML; they’re not a competitor, they’re a very good partner we work with closely, and we see them following suit, jumping in and trying to do some of the same things we’re doing here in India, because their business objectives are likewise reliant on closing that talent gap.

So I’m very, very proud of LAM, very proud that we’re leading this and that we’re out in front. But I’m also very proud to see the rest of the industry jumping in and copying what we’re doing, because we all need it to happen.

Paul Triolo

Great. Do we want to take a couple questions from the audience? Okay, wow, we got a lot of them. Okay, let’s go right here.

Harish Kumar

Thank you very much, Chairman. I am Harish Kumar from CSTV, Access to Energy Systems. First of all, I would like to thank the Minister for giving the semiconductor industry such a good start in India. My question is about skilling India and energizing India: can LAM Research run a skilling activity in wafer development for solar technology? Solar cells and solar modules come from wafers, yet there is no unit of any kind in India for wafer development. So is there any program on wafer development for solar cell manufacturing and marketing in India, rather than importing everything?

I don’t know if you…

Professor Saurabh Chandorkar

So I can actually answer to some extent, and let him take over from there. There are efforts going on in India for polycrystalline silicon growth for wafers, and that is coming up. I won’t reveal the name, because I don’t know if they want to reveal it, but it’s a big company, and they’ll be bringing it in. So it’s happening; it’s going to happen.

Harish Kumar

On skill development: India has youth, 40% youth. The question is skilling India, energizing India, and bringing solar technology to market.

David Freed

Sure. One thing I would say is: leverage the connection between industry, academia, and the government. It’s been incredibly fruitful, and, frankly, it’s also been pleasant; it’s been such a joy to work together between the government, academia, and our industry. I think solar should follow a similar model, where there’s a business opportunity, an educational opportunity, and an incentive to be successful as a country. Put those pieces together, and wonderful things can happen. I cannot express how enjoyable this experience has been in India, because the faculty we’ve worked with at IISc and the other schools are such consummate professionals, so invested in this vision of the future, and the government is backing it.

So I would urge you to copy this model of putting the three pieces together, and wonderful things can happen, because the demand is here, the supply is here, and the commitment to the vision is here.

Participant

Okay, one question, may I? This feels very palpably like a Y2K moment, where the demand is there and you have this great opportunity. If somebody listening to this has a young person in the family who is looking to pivot, as a flowchart, what is the first thing that young person needs to do to get into this market?

David Freed

For a young person: problem solving and critical thinking, whether they want to be building Legos or doing coding exercises. Critical thinking, problem solving, and then some specialization will occur naturally later. What I would urge against, and it goes back to some of my messages before, is focusing exclusively on a specific skill as the path to success. Just look at what’s happening with our previous focus on coding. Everybody said coding was the way to the future, the way to success, and now AI is writing all the code. So I would stress: avoid the urge to focus on a single skill or a single solution, and focus on a broad-based understanding: problem solving, critical thinking, physics, chemistry, materials science. The broad, hard physical sciences lead to these disciplines across the ecosystem.

Now, I say this as a father of two who has failed miserably to get his daughters into STEM. But I tried really, really hard, and I think that’s where the kids, where the talent, are going to come from: by thinking broadly, thinking critically, and thinking about problem solving, rather than picking one skill to get very good at.

Paul Triolo

I got my daughter into chemical engineering.

Participant

Just a minute. Sir, I have one intervention directly to you, David. I was listening to you with rapt attention. I was a student of English Literature at Calcutta University 30 years ago. There is a very famous essay by T.S. Eliot, “Tradition and the Individual Talent”; it is the talent pool that matters a lot. I have a specific question with respect to optimization, which you mentioned, and this idea that the semiconductor is AI, AI is semiconductor, and its optimization policy. Could you please highlight that as much as possible?

Paul Triolo

All right. Well, that will be our last question.

David Freed

So the interesting thing, I think, again, is that optimization and some of these technologies have to be really discipline-focused. When we’re doing R&D, we’re in a small-data environment; we don’t have a lot of data, and optimization isn’t very helpful. When we’re in manufacturing, we have lots of data, and optimization is extremely helpful. So we’re developing machine learning and AI techniques, but you have to bring the right tool to the job. I think we really have to focus on the discipline.

Paul Triolo

All right. Well, with that, we have to call it an end, because we have exceeded the time allotted to us and there are other people waiting to use this room. So thank you very much, David. Thank you, Paul, for hosting. Thank you very much, Professor Chandorkar. Appreciate it. Thank you very much, Paul. Thank you. Come over here for a photo op. Thank you. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (12)
Factual Notes — Claims verified against the Diplo knowledge base (4)
Confirmed — medium

“The exhibition was extended for an additional day and David Freed of LAM Research was present.”

The knowledge base notes that the event was extended by a day and that David Freed was among the attendees [S5].

Confirmed — high

“India announced participation in the Pax Silica consortium to build a trusted semiconductor supply chain.”

Both a press briefing and a description of Pax Silica confirm India’s entry into the consortium as a strategic move to become a trusted partner in the global semiconductor supply chain [S27] and [S66].

Additional Context — low

“Secretary S. Krishnan highlighted the convergence of the India AI Mission and the India Semiconductor Mission.”

The broader discussion in the knowledge base emphasizes convergence among government, industry and academia on India’s semiconductor workforce strategy, providing context for the AI-Semiconductor mission alignment [S1].

Confirmed — medium

“Secretary S. Krishnan is the Secretary for India (government).”

The knowledge base lists S. Krishnan as a Secretary speaking at the India AI Impact Summit 2026, confirming his official role [S18].

External Sources (69)
S1
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology)
S2
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-cybersecurity-_-india-ai-impact-summit — Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distingui…
S3
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Sorry, could I make a quick announcement to have all the panelists and the speakers on the stage for a quick photo? Mr. …
S4
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Harish Kumar- Role/Title: From CSTV, Access to Energy Systems
S5
https://dig.watch/event/india-ai-impact-summit-2026/ai-powered-chips-and-skills-shaping-indias-next-gen-workforce — Because of the skill development, India has a youth, 40 % youth in India. The question is skilling, skilling in India, e…
S6
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S7
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S8
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — -Ashwini Vaishnaw- Minister for Economic Electronics and Information Technology of India
S9
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Harish Kumar- Role/Title: From CSTV, Access to Energy Systems -Participant- Role/Title: Various unidentified audience …
S10
https://dig.watch/event/india-ai-impact-summit-2026/ai-powered-chips-and-skills-shaping-indias-next-gen-workforce — How do we ensure that we have the right talent, the research infrastructure, the technology expertise, the supply chain,…
S11
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Professor Saurabh Chandorkar- Role/Title: Professor at Indian Institute of Science (IISc); Key partner in the launch an…
S12
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -David Freed- Role/Title: Corporate Vice President and leader of LAM Research’s advanced analytical and simulation softw…
S13
https://dig.watch/event/india-ai-impact-summit-2026/ai-powered-chips-and-skills-shaping-indias-next-gen-workforce — thank you very much christian sir uh deepa sir is in such a hurry that you’re in such a hurry uh we want to make sure yo…
S14
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S15
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S16
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S17
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Participant- Role/Title: Various unidentified audience members asking questions -Paul Triolo- Role/Title: Partner in t…
S18
Keynote Adresses at India AI Impact Summit 2026 — Supply Chain Security and Trusted Partnerships This approach reflects lessons learned from recent supply chain disrupti…
S19
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — “And therefore, we need to have trusted partners with whom we can work and trusted value chains so that technology can w…
S20
Secure Finance Risk-Based AI Policy for the Banking Sector — “Three dominate cloud capacity and a handful command foundation models threatening financial stability and economic sove…
S21
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — Chips are the primary geopolitical battleground between China and the United States, with two key dynamics. The first is…
S22
Seismic Shift — As global supply chains realign and investment shifts from China to other countries, India can be a prime beneficiary if…
S23
AI driving solutions backed by Hyundai and Samsung — Canadian startup Tenstorrent and South Korea’s BOS Semiconductorsunveiled advanced AI chipsdesigned for infotainment and…
S24
AI-driven semiconductor expansion continues despite market doubts — The pace of the AI infrastructure boomcontinues to accelerate, with semiconductor supply chains signalling sustained lon…
S25
Semiconductors — Semiconductors and AI are closely intertwined. Semiconductors are the backbone of modern computing and are present in a …
S26
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including inSIDS — Further, the executive spoke of working with digital partners to devise systems capable of managing suppliers to ensure …
S27
Press Briefing by HMIT Ashwini Vaishnaw on AI Impact Summit 2026 | Day 5 — The semiconductor sector represents a parallel track of development, with Vaishnaw specifically mentioning the foundatio…
S28
The Global Power Shift India’s Rise in AI & Semiconductors — And again, AI leadership will not really happen by accident. It will require a deliberate alignment across policy, indus…
S29
The Battle for Chips — In conclusion, India’s strategic approach to developing a comprehensive semiconductor ecosystem demonstrates a commitmen…
S30
Opening Ceremony — This set the foundational tone for the entire forum, establishing the urgency and scope of digital governance challenges…
S31
UNSC meeting: Regional arrangements for peace — 10. Mozambique’s representative: Emphasized bilateralism, regionalism, and multilateralism as mutually reinforcing mecha…
S32
India to boost innovation and digital services — India has launched several transformative initiatives to strengthen its digital infrastructure and innovation ecosystem, f…
S33
India's Roadmap to an AGI-Enabled Future — Much of the semiconductor IP used globally is developed in India, particularly in Bangalore, Pune, and Hyderabad, but va…
S34
Closing Session  — Essential role of collaboration between industry and government stakeholders
S35
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Collaboration between academia and industry is essential for effective decarbonization strategies. An example is provide…
S36
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S37
AI-Powered Chips and Skills Shaping India's Next-Gen Workforce — “And I think this integration of government support for both the academic piece of this and the industry piece is really…
S38
The Global Power Shift India’s Rise in AI & Semiconductors — “And if you look at where the AI penetration, AI adoption, AI infrastructure resides globally, you can directly trace th…
S39
How to make AI governance fit for purpose? — Chuen Hong Lew: Well, Gabriela, thank you so much. Really nice to see you again and likewise to all my counterparts here…
S40
Next Steps for Digital Worlds — Recognizing the need for diverse perspectives and concerted efforts, the speakers emphasized the importance of navigatin…
S41
Semiconductors — Fabrication and IDMs. Policy measures for cooperation have been proposed, such as the EU Commission’s proposed CHIPS Act t…
S42
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Continuous learning and adaptability are essential for the future workforce. Examples of complex dialysis machines sitting i…
S43
The Battle for Chips — Addressing power consumption concerns in the semiconductor industry, India is actively engaged in research on advanced p…
S44
Towards a Reskilling Revolution — (61%). Skills gaps in local labour markets (57%) and in the leadership within the companies (52%) follow close behind. A…
S45
Parallel Session D3: Supply Chain Disruptions – The Role and Response of NTFCs — In summary, the analysis accentuated TFAs as catalysts for managing and enhancing supply chain efficiency. It also under…
S46
Parallel Session A5: Achieving Sustainable and Resilient Transport and Logistics including in SIDS — A startling 75% of supply chain managers concede that modern slavery—a gross ethical violation—likely exists within the …
S47
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — A key question posed was the creation and utilisation of economic incentives to synchronise efforts between the private …
S48
AI-Powered Chips and Skills Shaping India's Next-Gen Workforce — All three speakers emphasize that successful semiconductor workforce development requires close collaboration between in…
S49
The Battle for Chips — In conclusion, India’s strategic approach to developing a comprehensive semiconductor ecosystem demonstrates a commitmen…
S50
The Global Power Shift India’s Rise in AI & Semiconductors — First. First one is for Rahul. In the global race where others are moving fast, what is the one move India must execute …
S51
Opening Ceremony — This set the foundational tone for the entire forum, establishing the urgency and scope of digital governance challenges…
S52
Closure of the session/OEWG 2025 — Chair: Thank you very much, Djibouti, for your contribution. Is there anyone else who wishes to speak or who wishes to…
S53
AI-driven semiconductor expansion continues despite market doubts — The pace of the AI infrastructure boom continues to accelerate, with semiconductor supply chains signalling sustained lon…
S54
India's Roadmap to an AGI-Enabled Future — Much of the semiconductor IP used globally is developed in India, particularly in Bangalore, Pune, and Hyderabad, but va…
S55
India to boost innovation and digital services — India has launched several transformative initiatives to strengthen its digital infrastructure and innovation ecosystem, f…
S56
AI Meets Agriculture Building Food Security and Climate Resilien — And that’s truly right. evolutionarily empowering for farmers. But, you know, to make that work for farmers, there’s a l…
S57
Closing Session  — Essential role of collaboration between industry and government stakeholders
S58
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — In conclusion, digital technology has transformative potential in improving energy efficiency and reducing energy consum…
S59
The Purpose of Science / DAVOS 2025 — Collaboration between academic institutions and industry can lead to innovative solutions
S60
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-afternoon-session — No, I’ll talk about something. Prime Minister Modi, first of all, congratulations. Congratulations on a fantastic summit…
S61
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — Stephanie Borg Psaila: Thanks, Kenneth. I’ll reflect on a few comments that our colleagues have made, and I’ll start wit…
S62
The Geoeconomics of Energy and Materials/ DAVOS 2025 — Muhammad Taufik: Well, certainly, I think central to any national oil company’s duty is to ensure energy security, a…
S63
Powering the Technology Revolution / Davos 2025 — Antonio Neri: Antonio. Yeah, good morning everyone. So I like the context that has been said by my colleagues. First o…
S64
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you so much to the panelists as well as the moderators and the audience. Also on behalf of Undersecretary General …
S65
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters ar…
S66
https://dig.watch/event/india-ai-impact-summit-2026/keynote-adresses-at-india-ai-impact-summit-2026 — It’s a coalition of capabilities that replaces coercive dependencies with a positive sum alliance of trusted industrial …
S67
Strengthening bilateral technological cooperation: Indian Prime Minister discusses joint projects in US visit — Indian Prime Minister Narendra Modi is currently undertaking a significant state visit to the United States, where he ha…
S68
Keynote-Rishi Sunak — It will be those countries and those companies that adopt, adopt, adopt who will be the biggest winners. Now India can a…
S69
Hardware for Good: Scaling Clean Tech — Jennifer Schenker: Welcome to the session on Hardware for Good, Scaling Cleantech. I’m Jennifer Schenker, Editor-in-Ch…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
S. Krishnan
7 arguments · 157 words per minute · 1069 words · 407 seconds
Argument 1
Convergence of AI and semiconductor missions; need for a trusted, diversified supply chain
EXPLANATION
Krishnan explains that India’s AI and semiconductor missions are merging, making semiconductors central to AI development and vice‑versa. He stresses the necessity of a resilient, diversified supply chain to avoid over‑reliance on any single geography.
EVIDENCE
He notes that the session represents the convergence of the India AI mission and the India semiconductor mission, highlighting how semiconductors are central to the AI story and AI is increasingly central to the semiconductor story [27-30]. He also points to the need for a resilient and reliable supply chain for geopolitical and other reasons, citing supply-chain issues exposed during the COVID pandemic [31-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a trusted, diversified semiconductor supply chain and the convergence of AI and chip missions is highlighted in the India AI Impact Summit keynote discussing supply-chain security and trusted partnerships [S18] and the emphasis on trusted partners in AI exports [S19].
MAJOR DISCUSSION POINT
Supply chain resilience
AGREED WITH
Ashwini Vaishnaw, Paul Triolo
Argument 2
Government expanding focus beyond wafer fabs to full ecosystem, including equipment manufacturing and 10 new plants
EXPLANATION
Krishnan states that the government’s strategy now covers the entire semiconductor ecosystem, not just wafer fabrication, with plans for equipment manufacturing and the commissioning of ten major semiconductor plants across India.
EVIDENCE
He mentions that India will have ten major semiconductor plants, with four commencing production in 2026 and the rest following, and that the India Semiconductor Mission 2.0 will cover the whole ecosystem, including semiconductor equipment manufacturing [33-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s plan to cover the full semiconductor ecosystem, including equipment manufacturing and ten new plants, is detailed in the AI-Powered Chips and Skills briefing describing ISM 2.0 and the commitment of ten major plants with four starting in 2026 [S1], and reinforced by the press briefing announcing a new fab in Uttar Pradesh [S27].
MAJOR DISCUSSION POINT
Ecosystem expansion
AGREED WITH
Ashwini Vaishnaw
Argument 3
Projected $100 billion domestic semiconductor market by 2030; export capability essential for competitiveness
EXPLANATION
Krishnan projects that India’s semiconductor market will reach about $100 billion by the end of the decade, representing a sizable share of the global market, and argues that export capability is crucial for India to remain competitive and integrated in global value chains.
EVIDENCE
He cites the estimate that India’s semiconductor market will be about $100 billion by 2030 and stresses the importance of building capacity for both domestic consumption and export to stay competitive in the global supply chain [40-42].
MAJOR DISCUSSION POINT
Market potential and export
Argument 4
Critical shortage of skilled personnel for advanced semiconductor manufacturing and precision equipment
EXPLANATION
Krishnan highlights a gap in the workforce, especially in advanced manufacturing and the precision production of semiconductor equipment, which is essential for scaling up domestic capabilities.
EVIDENCE
He points out that while India has strong talent pools for design and AI, it lacks people in advanced manufacturing and precision equipment manufacturing, and that companies like LAM Research are looking to develop this capability [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A million-person talent gap across the semiconductor ecosystem, especially in advanced manufacturing and precision equipment, is documented in the AI-Powered Chips and Skills analysis [S1].
MAJOR DISCUSSION POINT
Talent shortage in advanced manufacturing
AGREED WITH
David Freed, Professor Saurabh Chandorkar, Rangesh Raghavan
Argument 5
ISM 2.0 positioned as a joint platform for skilling, supply‑chain integration, and domestic manufacturing
EXPLANATION
Krishnan describes ISM 2.0 as a comprehensive initiative that will address skilling, supply‑chain integration, and the development of domestic semiconductor manufacturing under a coordinated framework.
EVIDENCE
He notes that the India Semiconductor Mission 2.0 has been announced to cover the entire ecosystem, including equipment manufacturing, and will focus on skilling, supply-chain integration, and manufacturing [38-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ISM 2.0’s role as an integrated platform for skilling, supply-chain integration and domestic manufacturing is outlined in the AI-Powered Chips and Skills briefing [S1].
MAJOR DISCUSSION POINT
ISM 2.0 as integrated policy
Argument 6
AI and semiconductors are mutually reinforcing; AI drives chip demand and chips enable AI advances
EXPLANATION
Krishnan asserts that the growth of AI fuels demand for semiconductors, while advances in semiconductor technology are essential for AI progress, creating a virtuous cycle between the two sectors.
EVIDENCE
He states that semiconductors are central to the AI story and AI is increasingly central to the semiconductor story, emphasizing their convergence [29-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reciprocal relationship between AI and semiconductors, where each drives the other, is discussed in the overview of semiconductors as the backbone of modern AI systems [S25].
MAJOR DISCUSSION POINT
AI‑semiconductor synergy
AGREED WITH
Ashwini Vaishnaw
Argument 7
Signing of Pax Silica to ensure a trusted, resilient semiconductor supply chain
EXPLANATION
Krishnan mentions that India has signed the Pax Silica agreement, which is intended to build a trusted and resilient semiconductor supply chain, reinforcing supply‑chain security and reliability.
EVIDENCE
He refers to the signing of Pax Silica as an important step forward in building a trusted supply chain in the semiconductor space [30-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The signing of the Pax Silica agreement to build a trusted semiconductor supply chain is referenced in the AI Impact Summit keynote on supply-chain security [S18].
MAJOR DISCUSSION POINT
Supply‑chain trust
H
Harish Kumar
1 argument · 140 words per minute · 158 words · 67 seconds
Argument 1
Query on developing indigenous solar‑wafer capability as a parallel supply‑chain challenge
EXPLANATION
Harish Kumar asks whether India has any programmes for developing wafer capability for solar cell and module manufacturing, emphasizing the need for a domestic solar‑wafer supply chain rather than relying on imports.
EVIDENCE
He raises the question about the lack of any unit in India for wafer development for solar technology and asks if there is a program on wafer development for solar manufacturing and marketing [258-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same question about a domestic solar-wafer development programme appears in the AI-Powered Chips and Skills transcript, confirming the inquiry on a solar wafer programme [S1].
MAJOR DISCUSSION POINT
Solar wafer capability
A
Ashwini Vaishnaw
7 arguments · 129 words per minute · 464 words · 215 seconds
Argument 1
LAM integrating India’s supply chain into its global network, reinforcing ecosystem links
EXPLANATION
Vaishnaw highlights LAM’s role in linking India’s semiconductor supply chain with its global operations, thereby strengthening the overall ecosystem.
EVIDENCE
He refers to the LAM supplier ecosystem and acknowledges LAM’s contribution to the Indian semiconductor ecosystem, indicating integration with global supply chains [88-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
LAM’s integration of India’s semiconductor supply chain into its global network is highlighted in the AI-Powered Chips and Skills discussion of LAM’s ecosystem contribution [S1].
MAJOR DISCUSSION POINT
Global supply‑chain integration
AGREED WITH
S. Krishnan, Paul Triolo
Argument 2
Announcement of a new semiconductor fab in Uttar Pradesh, underscoring rapid capacity growth
EXPLANATION
Vaishnaw announces that a new semiconductor fabrication plant will be inaugurated in Uttar Pradesh, demonstrating the rapid expansion of manufacturing capacity in India.
EVIDENCE
He mentions that tomorrow in Uttar Pradesh a new semiconductor plant will be inaugurated by Prime Minister Narendra Modi, congratulating the effort [123-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inauguration of a new semiconductor fab in Uttar Pradesh by the Prime Minister is reported in the press briefing on the AI Impact Summit [S27].
MAJOR DISCUSSION POINT
New fab inauguration
AGREED WITH
S. Krishnan
Argument 3
Government targets of 60,000 clean‑room and 80,000 design engineers; expansion to 315 universities with active chip‑design programs
EXPLANATION
Vaishnaw outlines the government’s ambitious talent targets for clean‑room operations and design engineering, and notes the growth from an initial 50 universities to 315 universities where students are already designing and fabricating chips.
EVIDENCE
He cites the 2022 target of 60,000 clean-room and 80,000 design engineers, the expansion to 315 universities, and the existence of student-led chip design and validation across many Indian states [103-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targets of 60,000 clean-room and 80,000 design engineers and the growth to 315 universities with chip-design programmes are detailed in the AI-Powered Chips and Skills report [S1].
MAJOR DISCUSSION POINT
Talent targets and university expansion
AGREED WITH
Professor Saurabh Chandorkar, David Freed
Argument 4
Government backing of the Semiverse program and university collaborations, reinforcing policy commitment
EXPLANATION
Vaishnaw thanks LAM for its initiative and acknowledges governmental support for programs like Semiverse that foster university‑industry collaboration in semiconductor training.
EVIDENCE
He thanks LAM for taking the initiative and acknowledges the involvement of universities, indicating policy support for such collaborations [112-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Government support for the Semiverse university-industry collaboration is noted in the AI-Powered Chips and Skills briefing [S1].
MAJOR DISCUSSION POINT
Policy support for Semiverse
Argument 5
Semiconductor layer identified as a foundational tier in AI system architecture
EXPLANATION
Vaishnaw describes semiconductors as a critical layer within a five‑layer AI architecture, underscoring their importance for the future of AI.
EVIDENCE
He states that in the architecture of five layers, semiconductor is a very important layer, and calls for participation in this ecosystem [109-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Semiconductors being a foundational layer in AI system architecture is affirmed in the analysis of AI’s reliance on chip technology [S25].
MAJOR DISCUSSION POINT
Semiconductor role in AI architecture
AGREED WITH
S. Krishnan
Argument 6
Expansion to 315 universities with student‑led chip design and validation, creating a nationwide talent pool
EXPLANATION
Vaishnaw reiterates the scale of university participation, noting that students from across India are already designing chips and validating them, which will become a strategic national capability.
EVIDENCE
He notes that 315 universities now have students using world-class design tools, designing chips, and getting them manufactured and validated at SCL Mohali, with participation from states such as Assam, J&K, Kerala, Tamil Nadu [103-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The nationwide talent pool created by 315 universities where students design and validate chips is described in the AI-Powered Chips and Skills overview [S1].
MAJOR DISCUSSION POINT
Nationwide chip‑design talent pool
Argument 7
LAM’s state‑of‑the‑art Bengaluru systems‑engineering lab and its role in global supply‑chain integration
EXPLANATION
Vaishnaw points out LAM’s advanced systems‑engineering laboratory in Bengaluru, which contributes to India’s growing share in the global semiconductor industry and helps integrate the domestic supply chain with worldwide networks.
EVIDENCE
He mentions the state-of-the-art systems engineering lab for semiconductors in Bengaluru that is expanding India’s contribution to the global industry and integrating India’s supply chain into LAM’s global supply chain [20-22].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
LAM’s advanced systems-engineering laboratory in Bengaluru and its role in linking India to the global supply chain are highlighted in the AI-Powered Chips and Skills document [S1].
MAJOR DISCUSSION POINT
Advanced lab and global integration
R
Rangesh Raghavan
1 argument · 123 words per minute · 1070 words · 521 seconds
Argument 1
Opening remarks stressing the event’s role in highlighting workforce priorities
EXPLANATION
Raghavan welcomes participants, acknowledges the importance of the event for workforce development, and introduces David Freed to speak on workforce training initiatives.
EVIDENCE
He thanks the audience, notes the arrival of Minister Vaishnaw, and invites David Freed, leader of Semiverse Solutions and a key figure in building India's advanced semiconductor manufacturing workforce, to give comments [63-70].
MAJOR DISCUSSION POINT
Event as platform for workforce focus
AGREED WITH
S. Krishnan, David Freed, Professor Saurabh Chandorkar
P
Professor Saurabh Chandorkar
3 arguments · 143 words per minute · 1178 words · 491 seconds
Argument 1
Need for hands‑on fab training beyond simulation tools; academic fab alone cannot train 1 million workers
EXPLANATION
Chandorkar explains that while academic fabs provide valuable simulation experience, they cannot alone train the massive workforce needed; dedicated hands‑on training fabs are required to scale up to the million‑person target.
EVIDENCE
He describes the academic fab at IISc Bangalore, its high ranking, and states that a single academic fab cannot train one million people, emphasizing the need for additional training fabs and hands-on programs like INUP across India [140-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity for hands-on training fabs beyond academic simulations, and the INUP outreach program, are mentioned in the AI-Powered Chips and Skills discussion of scaling semiconductor workforce training [S1].
MAJOR DISCUSSION POINT
Hands‑on training scalability
AGREED WITH
Ashwini Vaishnaw, David Freed
Argument 2
Call for government support to redesign curricula and scale hands‑on training facilities nationwide
EXPLANATION
Chandorkar urges the government to adapt curricula to industry needs and to fund the establishment of more training fabs across the country, enabling practical skill development for semiconductor manufacturing.
EVIDENCE
He notes the necessity to redesign coursework to include essential fab skills, cites the establishment of a training fab, and calls for broader rollout of such facilities nationwide, referencing ongoing programs like INUP [153-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for curriculum redesign and expanded training facilities to meet industry needs are echoed in the AI-Powered Chips and Skills briefing on workforce development [S1].
MAJOR DISCUSSION POINT
Curriculum reform and training infrastructure
AGREED WITH
David Freed, Paul Triolo
Argument 3
IISc Bangalore’s academic fab and INUP outreach provide practical fab exposure; plans for additional training fabs across India
EXPLANATION
Chandorkar highlights the existing academic fab at IISc Bangalore and the INUP outreach program that brings students from across India to gain fab experience, while indicating plans to expand training fabs throughout the country.
EVIDENCE
He mentions that IISc Bangalore’s academic fab is among the top globally, that INUP brings participants from across India for fab exposure, and that they are gearing up to establish more training fabs nationwide [140-147] and [155-161].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IISc Bangalore’s top-ranked academic fab and the INUP program bringing students from across India are described in the AI-Powered Chips and Skills report [S1].
MAJOR DISCUSSION POINT
Academic fab and outreach expansion
D
David Freed
4 arguments · 164 words per minute · 1536 words · 560 seconds
Argument 1
Estimated million‑person talent gap spanning design, process, equipment, metrology, and reliability roles
EXPLANATION
Freed quantifies a talent shortfall of roughly one million people across the semiconductor ecosystem, covering roles from field service engineering to device reliability and metrology.
EVIDENCE
He outlines the million-person gap, noting it includes field service engineers, process engineers, equipment engineers, metrology engineers, device engineers, and reliability specialists, illustrating the breadth of the shortage [172-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A roughly one-million-person talent gap across design, process, equipment, metrology and reliability roles is quantified in the AI-Powered Chips and Skills analysis [S1].
MAJOR DISCUSSION POINT
Scale of talent gap
AGREED WITH
S. Krishnan, Professor Saurabh Chandorkar, Rangesh Raghavan
Argument 2
Proposal for faculty fellowships placing university professors in industry for 6–9 months to transfer practical knowledge
EXPLANATION
Freed suggests creating funded fellowships that allow university faculty to spend six to nine months working within industry, thereby bringing real‑world semiconductor expertise back to academia.
EVIDENCE
He describes the idea of faculty fellowships that would give professors a six-to-nine-month industry placement, funded to facilitate knowledge transfer between companies and universities [208-210].
MAJOR DISCUSSION POINT
Faculty‑industry exchange
Argument 3
Optimization and machine‑learning tools are vital in manufacturing; data‑rich environments make them highly effective
EXPLANATION
Freed explains that while optimization has limited value in data‑scarce R&D settings, it becomes extremely powerful in manufacturing where abundant data enables effective machine‑learning‑driven optimization.
EVIDENCE
He contrasts small-data R&D where optimization is less helpful with big-data manufacturing where optimization is extremely useful, and notes ongoing development of AI and machine-learning techniques for manufacturing [306-311].
MAJOR DISCUSSION POINT
Data‑driven optimization in fab
Argument 4
Semiverse program delivers holistic industry understanding and tool training, forming the talent pipeline
EXPLANATION
Freed describes the Semiverse initiative as providing students with a comprehensive view of semiconductor products and processes, equipping them with a broad industry understanding that prepares them for multiple roles.
EVIDENCE
He emphasizes that Semiverse focuses on teaching the overall understanding of semiconductor devices, process integration, and industry context rather than narrow single-skill training, thereby creating a versatile talent pipeline [172-184].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Semiverse program’s holistic industry training approach and its role in building the talent pipeline are outlined in the AI-Powered Chips and Skills briefing [S1].
MAJOR DISCUSSION POINT
Holistic training approach
AGREED WITH
Ashwini Vaishnaw, Professor Saurabh Chandorkar
P
Participant
2 arguments · 170 words per minute · 277 words · 97 seconds
Argument 1
Advice to youth: cultivate broad problem‑solving, critical‑thinking, and core science skills rather than a single narrow skill
EXPLANATION
The participant advises young aspirants to develop broad problem‑solving abilities, critical thinking, and a solid foundation in physics, chemistry, and materials science, warning against focusing exclusively on a single skill such as coding.
EVIDENCE
He recommends that young people focus on problem-solving, critical thinking, and interdisciplinary science foundations, and cautions against concentrating on a single skill like coding, noting that AI now writes code [287-292].
MAJOR DISCUSSION POINT
Broad skill development for youth
AGREED WITH
David Freed
Argument 2
Recommendation for aspiring entrants to focus on interdisciplinary science foundations (physics, chemistry, materials) to stay adaptable
EXPLANATION
Echoing his earlier advice, the participant stresses that a strong grounding in core physical sciences equips future workers to adapt across the varied semiconductor ecosystem.
EVIDENCE
He reiterates that physics, chemistry, and material science provide the foundational knowledge needed for diverse semiconductor roles, advising against narrow specialization [287-292].
MAJOR DISCUSSION POINT
Interdisciplinary foundation
P
Paul Triolo
1 argument · 143 words per minute · 767 words · 320 seconds
Argument 1
Moderator’s emphasis that effective talent development requires coordinated effort among government, academia, and industry
EXPLANATION
Triolo underscores that successful semiconductor talent development hinges on a three‑way partnership among government, academic institutions, and industry players.
EVIDENCE
He notes that effective talent development demands a coordinated three-way relationship among government, academia, and industry, and frames the upcoming discussion in that context [162-166].
MAJOR DISCUSSION POINT
Three‑way partnership for talent
AGREED WITH
David Freed, Professor Saurabh Chandorkar
Agreements
Agreement Points
There is a massive semiconductor talent gap that requires broad, holistic workforce development rather than narrow skill training.
Speakers: S. Krishnan, David Freed, Professor Saurabh Chandorkar, Rangesh Raghavan
Critical shortage of skilled personnel for advanced semiconductor manufacturing and precision equipment
Estimated million‑person talent gap spanning design, process, equipment, metrology, and reliability roles
Need for hands‑on fab training beyond simulation tools; academic fab alone cannot train 1 million workers
Opening remarks stressing the event’s role in highlighting workforce priorities
All four speakers highlighted that India faces a huge shortage of skilled workers across the semiconductor ecosystem and that training must be broad and holistic, encompassing both theoretical understanding and hands-on experience, rather than focusing on a single narrow skill [45-48][172-179][140-147][153-158][63-70].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for broad, continuous learning and reskilling to address skill shortages is highlighted in reports on AI’s impact on the future of work and the reskilling revolution, emphasizing holistic workforce development over narrow training [S42][S44].
Artificial intelligence and semiconductors are mutually reinforcing, creating a virtuous cycle between the two sectors.
Speakers: S. Krishnan, Ashwini Vaishnaw
AI and semiconductors are mutually reinforcing; AI drives chip demand and chips enable AI advances
Semiconductor layer identified as a foundational tier in AI system architecture
Both speakers emphasized that AI drives demand for chips while advances in semiconductor technology are essential for AI progress, making the two missions converge [29-30][109-111].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of India’s rise in AI and semiconductors note that AI adoption drives semiconductor investment and vice-versa, describing a virtuous cycle supported by supercomputing missions and AI-powered chip initiatives [S38][S37].
A resilient, diversified supply chain that links industry, academia, and government is essential for the semiconductor ecosystem.
Speakers: S. Krishnan, Ashwini Vaishnaw, Paul Triolo
Convergence of AI and semiconductor missions; need for a trusted, diversified supply chain
LAM integrating India’s supply chain into its global network, reinforcing ecosystem links
Moderator’s emphasis that effective talent development requires coordinated effort among government, academia, and industry
Krishnan highlighted the need for a trusted, diversified supply chain (including Pax Silica), Vaishnaw pointed to LAM’s integration of India’s supply chain globally, and Triolo stressed the three-way partnership among government, academia and industry as crucial for ecosystem resilience [30-33][88-92][162-166].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions stress a three-way partnership among industry, academia, and government as a cornerstone for a resilient supply chain, with calls for coordinated private-public incentives and regional collaboration [S37][S45][S47].
The Indian government is expanding its semiconductor focus beyond wafer fabs to the full ecosystem, including equipment manufacturing and new fab construction.
Speakers: S. Krishnan, Ashwini Vaishnaw
Government expanding focus beyond wafer fabs to full ecosystem, including equipment manufacturing and 10 new plants
Announcement of a new semiconductor fab in Uttar Pradesh, underscoring rapid capacity growth
Krishnan described ISM 2.0 covering the entire ecosystem and the plan for ten new plants, while Vaishnaw announced a new fab in Uttar Pradesh, showing coordinated government expansion of semiconductor capabilities [33-37][123-124].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s broader ecosystem strategy is reflected in its advanced packaging and equipment research programmes and in government policy signals that extend beyond wafer fabrication to the entire semiconductor value chain [S43][S38][S41].
Universities and academic fabs are central to building the semiconductor talent pipeline across India.
Speakers: Ashwini Vaishnaw, Professor Saurabh Chandorkar, David Freed
Government targets of 60,000 clean-room and 80,000 design engineers; expansion to 315 universities with active chip-design programs
Need for hands-on fab training beyond simulation tools; academic fab alone cannot train 1 million workers
Semiverse program delivers holistic industry understanding and tool training, forming the talent pipeline
Vaishnaw cited the growth to 315 universities and student chip-design activities, Chandorkar highlighted the academic fab at IISc Bangalore and the INUP outreach, and Freed described the Semiverse program’s holistic training approach, all underscoring the pivotal role of higher-education institutions [103-108][140-147][172-184].
POLICY CONTEXT (KNOWLEDGE BASE)
Government-backed initiatives emphasize the academic component of the talent pipeline, highlighting university fabs and education as key to developing AI and semiconductor expertise [S37][S42].
Strong collaboration between industry and academia, including faculty‑industry fellowships and curriculum redesign, is essential for effective talent development.
Speakers: David Freed, Professor Saurabh Chandorkar, Paul Triolo
Proposal for faculty fellowships placing university professors in industry for 6–9 months to transfer practical knowledge
Call for government support to redesign curricula and scale hands-on training facilities nationwide
Moderator’s emphasis that effective talent development requires coordinated effort among government, academia, and industry
Freed proposed funded faculty fellowships, Chandorkar called for curriculum updates and expanded training fabs, and Triolo reiterated the need for a three-way partnership, indicating consensus on industry-academia collaboration [208-210][153-158][162-166].
Young people should develop broad problem‑solving, critical‑thinking, and core science skills rather than focusing on a single narrow skill.
Speakers: Participant, David Freed
Advice to youth: cultivate broad problem-solving, critical-thinking, and core science skills rather than a single narrow skill
Broad skill development for youth
Both the participant and Freed urged aspirants to build interdisciplinary foundations (physics, chemistry, materials science) and avoid over-specialisation, emphasizing broad problem-solving abilities [287-292].
POLICY CONTEXT (KNOWLEDGE BASE)
Future-of-work panels stress continuous learning, adaptability, and broad scientific competencies as essential for emerging technology sectors, reinforcing the call for wide-ranging skill development [S42][S44].
Similar Viewpoints
Both speakers stress that the talent shortage is extensive and cuts across many functional areas of the semiconductor ecosystem, not limited to a single discipline [45-48][172-179].
Speakers: S. Krishnan, David Freed
Critical shortage of skilled personnel for advanced semiconductor manufacturing and precision equipment
Estimated million-person talent gap spanning design, process, equipment, metrology, and reliability roles
Both highlight the deep interdependence between AI and semiconductors, framing them as co‑drivers of each other’s growth [29-30][109-111].
Speakers: Ashwini Vaishnaw, S. Krishnan
Semiconductor layer identified as a foundational tier in AI system architecture
AI and semiconductors are mutually reinforcing; AI drives chip demand and chips enable AI advances
Both agree that practical, hands‑on exposure (beyond pure simulation) is essential for preparing a large‑scale semiconductor workforce [172-184][140-147].
Speakers: David Freed, Professor Saurabh Chandorkar
Semiverse program delivers holistic industry understanding and tool training, forming the talent pipeline
Need for hands-on fab training beyond simulation tools; academic fab alone cannot train 1 million workers
Unexpected Consensus
Indigenous solar‑wafer capability as a parallel supply‑chain challenge
Speakers: Harish Kumar, Professor Saurabh Chandorkar
Query on developing indigenous solar-wafer capability as a parallel supply-chain challenge
There are efforts for polycrystalline silicon growth for wafers
Although the panel focused on semiconductors, both participants converged on the need for domestic solar-wafer development, an issue not central to the main agenda, indicating an unexpected alignment on broader materials supply-chain concerns [258-267][268-272].
Overall Assessment

The discussion shows strong consensus among speakers that India faces a huge semiconductor talent gap, that AI and semiconductors are tightly linked, that a resilient, integrated supply chain is vital, and that universities together with industry must collaborate to build capacity. Additional agreement exists on the importance of broad, interdisciplinary skill development for youth.

High consensus across technical, policy, and educational dimensions, suggesting coordinated action is likely to be pursued and reinforcing the strategic priority of building a self‑sufficient semiconductor ecosystem in India.

Differences
Different Viewpoints
Approach to workforce development – specialized precision‑equipment/manufacturing skills versus a broad, interdisciplinary understanding and flexible talent pipeline
Speakers: S. Krishnan, David Freed, Professor Saurabh Chandorkar
Krishnan stresses the critical shortage of people in advanced manufacturing and precision equipment production and the need to skill people specifically for that work [45-48]
Freed argues that the focus should be on a broad industry understanding rather than a single narrow skill, warning against over-specialisation such as coding [184-186][287-292]
Chandorkar highlights the need for hands-on fab training and curriculum redesign to give practical manufacturing exposure, rather than relying on narrow skill sets [140-147][152-158]
Krishnan calls for targeted training in advanced manufacturing and precision equipment, while Freed and Chandorkar advocate a wider, concept-driven education model that equips students with a holistic view and hands-on experience rather than narrow, single-skill training [45-48][184-186][140-147][152-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on specialized versus interdisciplinary skill sets echo concerns about skill shortages and the need for flexible, upskilled talent highlighted in workforce-development and reskilling reports [S42][S44].
Assessment of existing manufacturing talent pool – Krishnan claims a shortage of advanced manufacturing talent despite a large overall talent pool, whereas Vaishnaw asserts India already possesses one of the largest manufacturing and AI talent pools
Speakers: S. Krishnan, Ashwini Vaishnaw
Krishnan notes that India lacks people in advanced manufacturing and precision equipment, even though it has strong design and AI talent [45-48][42-44]
Vaishnaw states that India is recognised as having one of the largest talent pools for manufacturing and AI in the world [43-44]
Krishnan highlights a gap in advanced-manufacturing expertise, while Vaishnaw emphasizes the overall size and strength of India’s manufacturing and AI talent, leading to a differing view on how sufficient the current talent base is [45-48][43-44].
POLICY CONTEXT (KNOWLEDGE BASE)
Contrasting assessments align with broader discussions that acknowledge both a sizable overall talent base and persistent advanced-manufacturing skill gaps, as documented in AI-impact and national talent analyses [S42][S38].
Unexpected Differences
Solar‑wafer capability question raised in a semiconductor‑focused forum
Speakers: Harish Kumar, Professor Saurabh Chandorkar
Kumar asks whether India has any indigenous programme for solar-wafer development, highlighting a perceived gap [258-267]
Chandorkar gives a vague, non-committal answer, mentioning ongoing efforts but refusing to reveal details, without directly addressing the specific solar-wafer programme [268-272]
The discussion was centred on semiconductor workforce and ecosystem, yet a participant introduced a solar-wafer supply-chain issue, leading to an unexpected divergence where the expert could not provide a clear answer, indicating a mismatch between audience expectations and the panel’s focus [258-267][268-272].
Literary reference to T.S. Eliot versus technical optimisation discussion
Speakers: Participant, David Freed
The participant cites an essay by T.S. Eliot on talent and asks for optimisation policy details [298-303]
Freed responds with technical remarks on optimisation in manufacturing versus R&D, without engaging the literary angle [306-311]
A participant shifted the conversation from technical semiconductor policy to a literary perspective, which was not addressed by the technical experts, creating an off-topic disagreement on the relevance of the question [298-303][306-311].
Overall Assessment

The panel largely concurs on the strategic importance of building a semiconductor workforce and supply‑chain resilience, but diverges on how best to develop talent – whether through narrowly focused precision‑equipment training, broad interdisciplinary education, or expanded hands‑on fab facilities. A secondary tension exists over the perceived adequacy of India’s existing manufacturing talent pool. Unexpectedly, questions about solar‑wafer capability and literary references introduced off‑topic disagreements.

Moderate – while there is strong consensus on the overarching goals, the differing views on training methodology and talent adequacy could affect policy design and allocation of resources, requiring careful coordination to align approaches.

Partial Agreements
While they share the goal of building a robust workforce, they differ on the primary mechanism: Krishnan focuses on precision‑equipment training, Freed on broad industry understanding and faculty fellowships, Chandorkar on hands‑on fab facilities, and Vaishnaw on university‑scale enrolment targets [45-48][172-179][140-147][103-108].
Speakers: S. Krishnan, David Freed, Professor Saurabh Chandorkar, Ashwini Vaishnaw
All agree that a large, skilled semiconductor workforce is essential for India’s ambitions
Krishnan points to the need for advanced-manufacturing skills [45-48]
Freed quantifies a million-person talent gap across many roles [172-179]
Chandorkar stresses hands-on fab training and curriculum changes [140-147][152-158]
Vaishnaw cites government targets of 60,000 clean-room and 80,000 design engineers and expansion to 315 universities [103-108]
All concur that coordinated action is needed, but Triolo frames it as a partnership model, Freed suggests specific fellowship programmes, and Chandorkar seeks policy‑driven curriculum and infrastructure changes [162-166][208-210][153-158].
Speakers: Paul Triolo, David Freed, Professor Saurabh Chandorkar
Triolo emphasises a three-way partnership among government, academia and industry for talent development [162-166]
Freed proposes faculty fellowships and industry-university collaboration [208-210]
Chandorkar calls for government-supported curriculum redesign and scaling of training fabs [153-158]
Takeaways
Key takeaways
India’s semiconductor ecosystem is expanding beyond wafer fabs to include equipment manufacturing, supply-chain integration, and a network of 10 new fab projects, aiming for a $100 billion domestic market by 2030.
The convergence of the India AI Mission and the India Semiconductor Mission underscores that AI drives chip demand while semiconductors enable AI advancements.
A significant talent gap, estimated at around one million workers across design, process, equipment, metrology, and reliability roles, must be addressed to sustain growth.
Hands-on fab training and practical exposure are essential; simulation tools alone are insufficient for preparing a large workforce.
Government, industry, and academia are collaborating through initiatives such as ISM 2.0, the Semiverse program, and faculty-fellowship concepts to build a holistic talent pipeline.
LAM Research is integrating India’s supply chain into its global network, operating a state-of-the-art Bengaluru lab, and leading workforce-development efforts.
Broad, interdisciplinary problem-solving skills (physics, chemistry, materials science) are emphasized over narrow, single-skill training for future talent.
The Pax Silica agreement and new fab announcements (e.g., Uttar Pradesh) signal a move toward a resilient, trusted global semiconductor supply chain.
Resolutions and action items
Launch faculty fellowships (6–9 months) placing university professors within industry to transfer practical knowledge.
Scale up hands-on training facilities (training fabs) across India, building on IISc’s pilot and the INUP outreach model.
Government to support curriculum redesign and funding for expanded fab-training programs under ISM 2.0.
LAM to continue expanding its Bengaluru systems-engineering lab, integrate Indian suppliers into its global supply chain, and broaden the Semiverse training rollout.
Commit to training workers in fabs, OSATs, and other semiconductor-related facilities both domestically and internationally.
Proceed with the announced semiconductor fab in Uttar Pradesh and the other nine committed fab projects.
Encourage universities to increase chip-design programs (now 315 institutions) and to align PhD projects with industry needs.
Unresolved issues
Specific roadmap, funding mechanism, and timeline for the proposed faculty-fellowship program.
Detailed implementation plan for curriculum changes and scaling of hands-on fab training nationwide.
Clear definition of how solar-wafer capability will be developed domestically and integrated with semiconductor supply-chain efforts.
Exact quantitative targets and milestones for closing the estimated one-million-person talent gap.
Mechanisms for incentivizing PhDs and other advanced researchers to enter manufacturing roles.
Comprehensive policy on optimization and AI-driven manufacturing that was requested but not fully explained.
Suggested compromises
Industry (LAM) offering its software tools broadly to other companies to accelerate ecosystem development.
Balancing broad, interdisciplinary talent development with targeted, hands-on skill courses (e.g., pressure-gauge, P&ID training) to meet immediate fab needs.
Aligning academic curricula with industry requirements while still preserving fundamental scientific education, as a middle ground between narrow skill-training and pure theory.
Thought Provoking Comments
We have two major missions, we have the India AI mission and we have the India semiconductor mission … this session kind of represents how semiconductors are so central to the AI story as AI is increasingly to the semiconductor story.
Highlights the strategic convergence of AI and semiconductor initiatives, framing them as mutually reinforcing rather than separate policy tracks.
Set the thematic foundation for the entire panel, prompting other speakers to discuss cross‑disciplinary talent needs and supply‑chain resilience, and steering the conversation toward integrated ecosystem planning.
Speaker: S. Krishnan
India Semiconductor Mission 2.0 has been announced, which will cover the entire ecosystem, including the manufacture of semiconductor equipment in the country. The real challenge in the next five years is to skill people in advanced manufacturing and precision equipment.
Introduces a concrete policy milestone (ISM 2.0) and pinpoints the critical skill gap in precision equipment manufacturing, moving beyond design talent to the harder‑to‑fill manufacturing side.
Shifted the dialogue from high‑level ambition to a specific workforce‑development problem, leading to detailed suggestions from academics (Prof. Chandorkar) and industry (David Freed) about training models.
Speaker: S. Krishnan
In 2022 we set a target of 60,000 talent for clean‑room operations and 80,000 design engineers. Today we have 315 universities, students using world‑class design tools, designing chips across Assam, J&K, Kerala, Tamil Nadu… semiconductor is a critical layer in the AI architecture.
Provides quantitative evidence of rapid scaling in education and talent pipelines, reinforcing the urgency and breadth of the initiative.
Validated the earlier claims of rapid progress, encouraged the panel to discuss how to sustain and deepen this growth, and underscored the national scale of the effort.
Speaker: Ashwini Vaishnaw
What’s missing is a second layer of hands‑on training. We have an academic fab, but we cannot train a million people alone. We are establishing a training fab and need similar facilities across India.
Identifies the practical bottleneck of moving from theoretical knowledge to real‑world fab experience, and proposes a scalable solution (training fabs).
Prompted a discussion on infrastructure needs beyond software tools, leading to David Freed’s suggestion of faculty fellowships and industry‑academia collaborations.
Speaker: Professor Saurabh Chandorkar
The million‑person gap is not a single type of skill. We need broad talent and a deep understanding of the whole ecosystem, not just narrow, single‑skill training.
Challenges the common “skill‑centric” narrative, reframing workforce development as building holistic understanding and adaptability.
Created a turning point where the panel shifted from listing specific roles to debating the philosophy of talent development, influencing subsequent suggestions about curriculum design and faculty involvement.
Speaker: David Freed
One idea is faculty fellowships: give university faculty a 6‑9‑month job inside our companies so they bring industry‑relevant knowledge back to academia.
Proposes a concrete mechanism to bridge academia and industry, addressing the earlier identified gap in hands‑on experience.
Sparked agreement from Prof. Chandorkar, who echoed the need for industry‑aligned projects, and added momentum to the call for collaborative training models.
Speaker: David Freed
For a young person, focus on problem‑solving, critical thinking, physics, chemistry, material science – a broad, hard‑science foundation – rather than chasing a single skill like coding.
Offers actionable career guidance that aligns with the panel’s broader talent‑development theme, while critiquing the over‑emphasis on narrow skill sets.
Provided a clear takeaway for the audience, reinforced the earlier talent‑vs‑skill argument, and concluded the discussion with a practical recommendation.
Speaker: David Freed
Overall Assessment

The discussion was driven forward by a handful of strategic insights that moved it from generic enthusiasm to concrete, actionable planning. S. Krishnan’s framing of AI and semiconductor missions as intertwined, and the announcement of ISM 2.0, created the initial focus on ecosystem‑wide talent needs. Ashwini Vaishnaw’s data‑rich update validated the rapid progress and underscored scale. Professor Chandorkar’s call for a second layer of hands‑on training highlighted the practical bottleneck, which David Freed reframed as a broader talent‑development issue rather than a narrow skill gap. Freed’s faculty‑fellowship proposal and his advice to youth crystallized the collaborative, holistic approach the panel advocated. Together, these comments redirected the conversation toward integrated policy, education, and industry actions, shaping a narrative that emphasized systemic, long‑term capacity building over short‑term skill fixes.

Follow-up Questions
What specific government support is required under ISM 2.0 for academia (e.g., funding, policy changes, infrastructure) to scale semiconductor skill development?
Clarifying government expectations will enable universities and research institutes to align resources and curricula with industry needs, accelerating the creation of a skilled workforce.
Speaker: Paul Triolo, Professor Saurabh Chandorkar
What is the role and mandate of IAS (Indian Academy of Sciences/Institute of Advanced Studies) and what does it need from the government and industry under ISM 2.0?
Understanding IAS’s function and its requirements is essential for coordinating the three‑way partnership (government, academia, industry) that underpins the semiconductor ecosystem.
Speaker: Paul Triolo
Are there dedicated programs or initiatives for domestic wafer development for solar cell manufacturing in India, aimed at reducing imports?
India’s solar ambitions depend on a homegrown wafer supply chain; identifying existing or planned programs will inform policy and investment decisions.
Speaker: Harish Kumar
How can hands‑on training facilities (training FABs) be expanded across India to meet the projected million‑person talent gap?
Practical fab experience is critical for preparing engineers; scaling training FABs will bridge the gap between theoretical knowledge and industry readiness.
Speaker: Professor Saurabh Chandorkar, Paul Triolo
What mechanisms can be put in place for faculty fellowships within semiconductor companies (6‑9 month industry placements) to transfer industry‑relevant knowledge to universities?
Embedding faculty in industry will refresh curricula, foster research collaborations, and ensure graduates possess up‑to‑date skills.
Speaker: David Freed
How can industry‑led practical courses (e.g., pressure‑gauge operation, P&ID systems) be standardized and offered by more companies beyond LAM?
Broadening access to hands‑on modules will create a more uniformly skilled workforce and reduce reliance on a single provider.
Speaker: Professor Saurabh Chandorkar
What strategies can attract more PhDs to semiconductor manufacturing roles and align doctoral research projects with industry needs?
PhDs bring deep problem‑solving abilities; directing their research toward fab challenges will enhance innovation and talent depth.
Speaker: Professor Saurabh Chandorkar, David Freed
How can optimization and AI/ML techniques be tailored for small‑data R&D environments versus big‑data manufacturing settings in semiconductor production?
Effective optimization requires different toolsets depending on data volume; research is needed to develop appropriate methodologies for each context.
Speaker: David Freed
What is the detailed breakdown of the million‑person talent gap across specific roles (field service engineers, process engineers, metrology, device engineers, etc.)?
A granular view of shortages will allow targeted training programs and policy interventions to address the most critical skill deficits.
Speaker: David Freed
What concrete benefits has membership in Pax Silica brought to India’s semiconductor supply‑chain resilience and trustworthiness?
Evaluating the impact of Pax Silica participation will help justify further engagement and guide future collaborative standards work.
Speaker: S. Krishnan
How can the semiconductor design and manufacturing curricula be uniformly integrated across the 315 universities currently involved in the Semiverse program?
Standardized curricula ensure consistent skill levels nationwide, facilitating smoother transition of graduates into industry roles.
Speaker: Ashwini Vaishnaw
What steps are needed to develop indigenous precision manufacturing capability for semiconductor equipment in India?
Building domestic equipment capacity reduces import dependence and strengthens the overall ecosystem, but requires research, investment, and skill development.
Speaker: S. Krishnan
How effective has the Semiverse program been in delivering industry‑ready talent, and what metrics should be used to assess its impact?
Measuring outcomes will inform program improvements, justify funding, and ensure alignment with industry demand.
Speaker: Multiple (David Freed, Professor Chandorkar, Paul Triolo)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Artificial General Intelligence and the Future of Responsible Governance

Artificial General Intelligence and the Future of Responsible Governance

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by noting the rapid acceleration of AI since 2020 and the emerging public debate about artificial general intelligence (AGI), warning that societies that ignore these trends may miss the chance to shape future governance [1-8]. Participants agreed that AGI is envisioned as a form of AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow domains, unlike current systems that excel only in specific tasks [11-18].


Simonas Satunas offered a pragmatic definition of AGI as an entity capable of performing any human professional task with comparable accuracy, estimating a 3-to-7-year horizon based on growing public trust in generative AI tools [21-24]. Alexandra Bech Gjørv cautioned against fixing a timeline, emphasizing that progress depends on sustained investment and advanced low-latency hardware such as neuromorphic and edge computing, and that privacy constraints on personal data may limit situational awareness [23-30][35-37]. Kenny Kesar highlighted that achieving the “five-nines” level of accuracy, a benchmark that historically required five to ten years per additional nine, will be essential for moving from probabilistic to more deterministic AI behavior [41-48].


Simonas Cerniauskas warned that current massive compute spending may be a bubble and that algorithmic efficiency could curb over-investment, while Simonas Satunas stressed that compute is only one of several critical components, alongside energy, data, and especially human critical-thinking capacity [70-71][72-90]. Satunas also mapped AI risks into four layers (traditional privacy and security, mental-health impacts, social effects such as empathy erosion, and macro-level threats to democracy), calling for coordinated national and international strategies to mitigate them [131-139].


Alexandra argued that democratic access to compute must be paired with education of policymakers, noting that human oversight is often inadequate in ethical dilemmas and that algorithmic monitoring can reduce bias, as illustrated by a sports-analytics case where video review eliminated discriminatory decisions [96-101]. Kenny proposed institutionalizing “AI operating procedures” (AOP) analogous to current SOPs, training models to avoid bias and establishing external audits to ensure ethical compliance as AI approaches general intelligence [191-199].


The panelists concurred that early “anchor controls” such as robust labeling, regulatory measures, and resilience planning (including rollback mechanisms and diversified energy sources) are needed to limit harmful outcomes while enabling innovation [173][187-189]. They also agreed that collaboration among industry, academia, and governments is vital to embed egalitarian values and prevent profit-driven bias, citing examples such as the amplification of violent content in Myanmar through platform algorithms [174-180]. In closing, Vinayak summarized that understanding AGI’s implications for security, privacy, and ethics requires immediate action, and announced the launch of an AI Cyber Security Terminal to support these efforts [202-207]. Overall, the discussion underscored a consensus that multidisciplinary governance, education, and measured investment are essential to steer AGI development responsibly [1-8].


Keypoints


Major discussion points


Defining AGI and estimating its arrival – The panel opened by questioning what “Artificial General Intelligence” actually means and how soon it might appear. Vinayak highlighted the surge in AI breakthroughs since 2020 and the growing talk of AGI [1-4]. Simonas Cerniauskas noted common traits of AGI such as reasoning, learning, adaptation and knowledge transfer [12-18]. Simonas Satunas offered a concrete (though simplified) definition – an AI that can perform any human professional task at human-level accuracy – and projected a 3-to-7-year horizon [21].


Compute, hardware, and investment as enablers (and possible bottlenecks) – Several speakers stressed that massive compute power, new architectures and funding are critical to reaching AGI. Cerniauskas described the current “super-high-cycle” of investment and the risk of a bubble [70-71]. Simonas Satunas used a 19th-century transport metaphor to argue that compute is only one element among energy, data, implementation and human skills [72-85][86-87]. Alexandra added that low-latency, energy-efficient neuromorphic and edge hardware are required for human-like situational awareness [31-34].


Security, privacy and ethical threats of powerful AI – The conversation turned to the dangers that more capable models pose. Kenny warned that the same AI that creates content can also generate sophisticated attacks, and that an AGI could impersonate humans such as CEOs [105-108]. Simonas Satunas broke down AI risks into four layers – classic cyber-security/privacy, mental-health impacts, social-cohesion, and macro-societal threats to democracy [131-138]. Alexandra highlighted the need for human oversight and the difficulty of making ethical decisions in autonomous systems [96-102].


Human factors: critical thinking, education and regulation – Multiple panelists argued that technology alone will not solve the challenges; societies must boost critical thinking and regulatory capacity. Simonas Satunas stressed that raising public critical thinking is as important as investing in compute [88-92]. Kenny pointed out that over-reliance on AI-generated content could erode our own reasoning muscles, creating a “vicious cycle” [164-170]. Alexandra called for educating politicians and the public so that ethical choices are made before machines dominate [96-102][187-190].


Early-stage governance and “anchor-control” concepts – The moderator asked for concrete steps that can be taken now. Cerniauskas suggested technical measures such as watermarking/labeling and hinted at regulatory actions [173-176]. Simonas Satunas advocated for global coordination, industry-academia collaboration, and embedding egalitarian values into AI design [174-180]. Alexandra proposed resilience measures, robust rollback mechanisms, and scenario planning for infrastructure loss [187-190]. Kenny introduced the idea of an “AI Operating Procedure” (AOP) to embed bias-checks, ethical reviews and continuous monitoring into AI deployments [191-199].


Overall purpose / goal of the discussion


The panel aimed to demystify AGI by clarifying its definition, likely timeline, and technical prerequisites, while simultaneously surfacing the security, privacy, ethical, and societal risks that accompany rapid AI advancement. By juxtaposing technical optimism with cautionary perspectives, the participants sought to identify practical “anchor-control” measures and governance frameworks that can be instituted today to steer the emergence of AGI responsibly.


Overall tone and its evolution


Opening (0:00-3:30) – Curious and forward-looking, with speakers outlining possibilities and expressing excitement about breakthroughs.


Middle (3:30-15:00) – The tone shifts to a more cautionary stance, emphasizing the gaps between current narrow AI and true AGI, and flagging looming security and ethical threats.


Later (15:00-35:00) – Concern deepens as concrete risks (misinformation, bias, cyber-attacks) are discussed, but a collaborative, problem-solving attitude emerges.


Closing (35:00-end) – The discussion becomes pragmatic and solution-oriented, focusing on governance, resilience, education and concrete early-stage controls.


Thus, the conversation moves from exploratory optimism to measured concern and finally to actionable recommendations.


Speakers

Ms. Alexandra Bech Gjørv – Head of Sintef, Norway’s largest research institute; expertise in AI research, neuromorphic and edge computing, and AI governance.


Mr. Vinayak Godse – Moderator/host of the panel discussion on AGI; involved in AI policy and security discussions.


Mr. Simonas Satunas – Speaker on AGI, provides definitions and timelines; background in AI development and public engagement (Israel).


Mr. Kenny Kesar – Speaker on AI accuracy, compute, and market disruption; experience in AI consulting and implementation for clients.


Simonas Cerniauskas – Speaker focusing on AI investment cycles, compute efficiency, and regulatory perspectives.


Additional speakers:


None (all participants in the transcript are covered by the speakers list).


Full session report: Comprehensive analysis and detailed insights

The session opened with moderator Vinayak Godse framing the rapid acceleration of artificial-intelligence research that began around 2020 and intensified after the launch of powerful generative models in early 2023, warning that societies that ignore these developments risk missing the chance to shape the governance of the next technological wave, possibly the arrival of artificial general intelligence (AGI) within the next two to ten years [1-8].


Defining AGI – Cerniauskas said most definitions agree that AGI should be able to reason, learn, adapt and transfer knowledge, and that it must be broader than today’s narrow-domain systems such as customer-service bots [12-18]. Building on this, Satunas offered a pragmatic, human-centric formulation: an AI that can perform any professional task with the same accuracy and professionalism as a human expert. He linked this functional view to a growing public trust in generative tools, noting that roughly half of Israeli respondents already trust AI more than their friends, which he interprets as a step toward AGI [21-24].


Timeline and investment uncertainty – Satunas projected a 3-to-7-year horizon, arguing that the convergence of technical capability and societal trust makes the milestone imminent [21-24]. By contrast, Gjørv rejected a fixed schedule, insisting that progress depends on sustained investment, hardware breakthroughs, data-privacy and regulatory challenges, and warned that policy should ensure broad, democratic access to compute resources rather than concentrating power in a few providers [23-26]. Godse echoed this uncertainty, urging societies to prepare now rather than wait for a precise date [1-7]. Cerniauskas described the current “super-high-cycle” of compute spending as potentially speculative, noting industry chatter about a bubble and the possibility of over-capacity persisting for years, as even Mark Zuckerberg has suggested [70-71].


Technical prerequisites – Gjørv highlighted that human-like situational awareness will require ultra-low-latency, energy-efficient hardware such as neuromorphic and edge-computing architectures, together with massive private data streams – a requirement that immediately raises privacy concerns [31-34][35-37]. Godse asked about the latency of System 2-type reasoning and the limitations of language-only models; the panel noted that current large language models (LLMs) excel at fast, intuitive (System 1) pattern-matching but struggle with deep, logical (System 2) contextual understanding, exposing a key bottleneck for AGI [95-98]. Kesar framed progress in terms of accuracy, invoking the “five-nines” benchmark: “to get from 90% to 99% accuracy took five to ten years”, and argued that each additional nine adds one to two more years, driving compute demand toward AGI [44-48]. Satunas broadened the picture with a 19th-century transport metaphor, arguing that compute is only one link in a chain that also includes energy, data, implementation, language localisation and, crucially, human critical-thinking capacity [72-90].
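As an illustrative aside (a sketch added by the editor, not part of the session), the “nines” arithmetic can be made concrete: each additional nine divides the tolerated error rate by ten, so moving from two nines (99%) to five nines (99.999%) is a thousand-fold reduction in permitted errors, which is one intuition for why each step demands disproportionately more compute and time.

```python
import math

# Illustrative sketch of the "five nines" framing: each additional
# nine divides the tolerated error rate by ten.
def nines(accuracy: float) -> float:
    """Number of 'nines' in an accuracy figure, e.g. 0.999 -> 3.0."""
    return -math.log10(1.0 - accuracy)

for acc in (0.90, 0.99, 0.999, 0.9999, 0.99999):
    errors_per_million = (1.0 - acc) * 1_000_000
    print(f"{acc:.5f} accuracy = {nines(acc):.0f} nine(s), "
          f"~{errors_per_million:,.0f} errors per million")
```

On this logarithmic view, Kesar’s “one to two years per nine” claim amounts to saying that the calendar cost of accuracy grows linearly while the error tolerance shrinks exponentially.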


Security, privacy and risk taxonomy – Kesar warned that the same generative models that create content can also craft sophisticated cyber-attacks and impersonate senior executives, making future threats “real” once AGI can emulate human behaviour [105-108]. Gjørv added that achieving true situational awareness would require access to personal data, but privacy regulations limit such collection, creating a tension between capability and rights [35-37]. Satunas categorised AI risks into four layers: (1) classic privacy, security and fraud; (2) mental-health impacts; (3) social effects such as erosion of empathy and bullying; and (4) macro-level threats to democracy through manipulation and fake-news campaigns [131-138].


Human factors – Satunas argued that without a strong emphasis on critical-thinking education, societies will be unable to recognise AI-generated manipulation; he noted that 30 % of online content is already AI-generated, creating a feedback loop that could stall human intellectual growth [154-155][165-170]. Kesar echoed this, warning that reliance on AI-generated content may erode the “brain-muscle” needed for innovation, leading to a vicious cycle where AI diminishes the very intelligence it seeks to emulate [164-170]. Gjørv reinforced the need for political and public education, pointing out that human oversight often fails in ethical dilemmas and that policymakers must be equipped to make hard choices before machines do [96-102]. Cerniauskas also noted the importance of the human critical-thinking element as part of the broader ecosystem [12-18].


Early-stage “anchor-control” proposals – The panel offered a spectrum of concrete measures:


Technical and regulatory safeguards (e.g., watermarking, output labeling) – Cerniauskas [173-176];


Resilience planning (robust rollback mechanisms, diversified energy sources, scenario-based risk matrices) – Gjørv [187-189];


AI Operating Procedure (AOP) – a procedural framework embedding bias-audits, ethical training and continuous monitoring, analogous to traditional SOPs – Kesar [191-199];


Global regulatory collaboration – especially for smaller nations, to embed egalitarian values and mitigate bias, citing the Myanmar example where platform algorithms amplified violent content – Satunas [174-180].


Points of agreement – All speakers concurred that education, awareness and critical-thinking skills are essential to counter AI-driven threats [154-155][164-170]; they also agreed on the need for layered risk-management frameworks that combine technical safeguards, resilience planning and procedural oversight [187-189][131-138][191-199]. Both Gjørv and Satunas highlighted privacy as a fundamental constraint on the data required for human-level situational awareness [35-37][131-138]. The panel agreed that the proliferation of AI-generated misinformation poses a serious societal risk [144-149].


Remaining disagreements – The timeline for AGI remained contested (Satunas’ 3-to-7-year estimate vs. Gjørv’s refusal to set a horizon vs. Godse’s call for preparedness). On the primary driver of progress, Kesar foregrounded compute-driven accuracy improvements, while Satunas argued for a holistic ecosystem, and Gjørv emphasised specialised low-latency hardware as the bottleneck. Regarding governance mechanisms, the four speakers advocated different early-stage toolkits, reflecting a lack of consensus on the optimal approach.


Closing remarks and announcement – Godse summarised the collective insight: while the acceleration of AI capabilities creates unprecedented opportunities, immediate action is required to embed security, privacy, safety and ethical safeguards into the emerging paradigm [202-207]. He concluded by announcing the launch of the “AI Cyber Security Terminal” as the session’s final action [208-210].


The panel’s recommendations can be grouped as follows: (i) institute early anchor controls such as output labeling and technical safeguards; (ii) invest in education programmes that foster critical-thinking and AI literacy; (iii) foster cross-sector collaboration to develop global, risk-adaptive regulatory frameworks; (iv) adopt AI Operating Procedures that institutionalise bias-checks and ethical reviews; and (v) design resilience and rollback mechanisms to limit the impact of failures or malicious use. These steps aim to steer the trajectory toward AGI responsibly, balancing compute-driven progress with human-centred governance. [173-176][187-189][191-199][202-207][208-210]


Session transcript: Complete transcript of the session
Mr. Vinayak Godse

Pet Summit and the basic idea and intent behind setting up this session is while all the things were happening in AI in the period of 2020, a lot of development happening and somehow all that is now leading to kind of acceleration that we are seeing in last three years of time and especially this year, since January, all the new launches that we see, we are getting the first sign of a powerful AI, right? And now because of that, there is a discussion about AGI seems to be gaining quite a significant ground, right? And although people still have a lot of doubt and skepticism about whether it is really reality or possibility in coming future or what that means, many people are still skeptical.

They are struggling to define what that means for us as a society. And I can tell you about India: probably we didn’t pay much attention when AI was coming. If we don’t pay attention now to what is coming in the next 2, 3, 5 or 10 years of time, which is probably the timeline for AGI, then probably we will again miss out on thinking, talking, discussing, governing it better, basically. So this discussion is to help us and the audience here understand: what do we mean by AGI, can we really think about that right now, what are the different concerns that we need to think about (and with that, welcome to the panel), and try to then find the possible meaning for security, privacy and ethics, basically.

So I would like to start with you: how do you see this concept of AGI, and fundamentally how will it be different from what we see today? What is your understanding of the difference between artificial intelligence and artificial general intelligence?

Simonas Cerniauskas

So, yeah, thank you very much for having us here. And, yeah, like you said, it’s a really nice topic to wrap up the conference. So, well, so, you know, of course, there are kind of different definitions of AGI. And on the same time, most of them agree that it’s, you know, it’s about smarter AI than we have right now. We were joking a bit that, you know, on the way, the traffic is really, you know, exceptional. And, yeah, that’s a sign that maybe we are still not here today. So, but, yeah, but basically kind of among those common agreements that, let’s say, the smarter AI should reason. It should learn. It should adapt. And also it should transfer knowledge.

And also it shouldn’t be, you know, very narrow. Like, you know, of course, right now we have great, let’s say, areas where AI is really helping a lot, like co-development, customer service, et cetera, but, you know, it should be much broader. And, you know, I don’t think that any of us, maybe the colleagues, will be able to answer when we will have it, and in what timing, but definitely, you know, that’s one of the big topics right now.

Mr. Vinayak Godse

Let me come to you: you look at the digital initiative and artificial intelligence as one of the important research areas. So we are grappling with understanding where we are right now, but can we think about what would happen in the next three to five years of time? That seems to be the timeline for AGI.

Mr. Simonas Satunas

So I’m the one with the date, I’ll do my best. So first of all, my definition of AGI is very simplistic, and I think that we need some simple explanation in this field, and my very simple explanation is: AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional. Now this is not an optimal definition, because people can ask, every task? If a baby is crying, will the AGI help him stop crying? And people can ask, what is the level of professionality? But I think that this is something that we can digest. And I think that, for me, I understood that we are getting closer there not from a technology perspective but from the perspective of talking with real Israelis about their problems. And five years ago, when I was telling this definition of AGI, people were like, oh, it’ll never happen, not in our lifetime. And right now, when I’m speaking with Israelis and I’m telling them this is AGI, they’re saying, oh, aren’t we there yet? Because I thought that ChatGPT can help me like a lawyer, isn’t it true? Now, I think that we are not there yet, okay? There is a very sharp line between the AI that we are experiencing today and true AGI. But the fact that the audience is already confused, the fact that people give trust to Gen AI tools (50% of Israelis trust them more than they trust their friends; many trust them more than they trust human professionals), this puts us closer to AGI. So I would say that it’s a matter of 3 years to 7 years until we reach that milestone.

Mr. Vinayak Godse

So, coming to you, Alexandra, how do you see this as a concept? What is leading to this AGI? What would we do that will impact the future of AI and bring this age of AGI in three or seven years of time?

Ms. Alexandra Bech Gjørv

Well, I’m not necessarily subscribing to the time frame. I think that depends on how much money we throw at it. And then there are other things to throw money at as well. Some of this, for example, we had a discussion with my team, you know, are machines able to make complex decisions as fast as humans? And in some areas, like, you know, many operations demand millisecond response and reflex level. You know, you can see that machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, etc., that’s still quite far away. They take too long. And in a dynamic environment, you know, a wrong decision or a late decision is really a wrong decision.

So in order to get there, I, you know, there’s both low latency, energy efficient hardware, neuromorphic and edge computing and architectures beyond auto regression. But I think, you know, the researchers in Sintef, I head up the largest research institute in Norway. They, you know, they point to promising like hierarchical reflex reasoning systems, embodied multimodal learning, et cetera, et cetera. And there’s really no real doubt that you will get there. But there’s, in order to have the situational awareness like a human, you have to study a lot of data that would be considered private, personal. So there’s really limits on privacy. And then it triggers a lot of other questions that I’m sure we’ll get into.

Mr. Vinayak Godse

Yeah, we’ll come to that. So, Mr. Kenny, you must be serving many clients right now on AI, right? And every one of us is getting stunned by the progress and acceleration of the capability that is happening week by week, basically, right? And that also scares us: what is coming next, right? And when it comes to that level, there are two words with which somebody defines AGI. So one is consistency across domains, that it will be so general that it will be consistently performing across domains, and the second part is that it will be reliable as well. So currently it sometimes doesn’t have an answer and it throws output anyway, and that’s why hallucination happens, basically. So consistency and reliability, that’s what AGI will bring to the table, basically. So it will solve a lot of the problems that we see right now. We have also been getting stunned by the things that it can do, basically. So there are routes to achieve AGI, which will lead us to AGI, basically. So how do you think, from your perspective, the journey will probably take us there?

Mr. Kenny Kesar

So, you know, I agree with the panel on a couple of things we talked about in terms of where we’re getting to, models evolving. But you bring up another component of accuracy. I’ll talk about accuracy first, and then I’ll come back to the disruption which is happening in the market. Now, the epitome of accuracy is five nines. So for AI to get from 90% to 99%, it took five to ten years. Now, every nine that you add is another year or two years, to the point where you get to 99.99 and more nines. So every nine that you’re adding has a time frame to it. And the more nines that you add, the closer you get to general intelligence, because that’s what it’s going to take to look like the human brain.

I’ll take the topic of autoregression that you talked about. AI is right now built on regression. It’s built on learnings of the neural network, the neural network maturing on information that it sees. But the human brain is also inventing. It’s researching. So when AI really gets to the point of being able to research and bring new ideas to life the way a human brain does, you’re getting closer to intelligence. Now, the disruption in the market that you’ve seen, with announcements across the different players which dominate the AI market, is creating a disruption in the industry, and I think it’s the right disruption. It’s the disruption that the word processor did to the typewriter, what computers did to the word processor, and what cloud did to the data center.

This is another such shift, but it’s much faster because it’s more pervasive and it impacts everybody in life. So the fact is, people are talking about how it translates to them. When I say it translates to me, it’s about how we structure processes. Everybody agrees, and I agree, accuracy is work in process. And since accuracy is work in process, we have to be really mature about the use cases that we put onto it. We have to look at the human pyramid, what components of the pyramid you’re going to look at. So the way we are advising our clients, and what we’re doing ourselves, is maker jobs, which are basically repetitive jobs with little context.

AI does those very well, but you create a controller for these autonomous agents. So a combination of probabilistic and deterministic is what’s going to be the near future, as we get more and more deterministic when we get to general intelligence, because from a human perspective, it’s mostly deterministic.

Mr. Vinayak Godse

Right. Yeah. So, thank you all for putting some level of clarity around what this means. And at the end of the day, AGI is, as they say, attention, right? The ability to give attention to all the possible things that millions and billions of people are asking questions about. But as you rightly say, the context matters, so it’s not only attention; it should be contextual to your requirements and the things that you do, right? And the third important part is reasoning, and the last six months have been great months for the reasoning that it brings to the table, basically. So my question is, and any of you can answer this: for achieving all of these things, why does compute become so important? Why do you need this much compute? Why are there trillions of dollars being invested to make sure that it applies attention to each and every problem better, and that it is contextual and reasoning, and at the same time with low latency, as I talked about? So what is the role of compute in all of this, any of you?

Simonas Cerniauskas

Yeah, so, you know, of course, if I may start, and of course please add on. So currently we are at a super high cycle, let’s say, of those investments, and most of us are also wondering: is it a bubble, or when will it blow a bit, etc.? Is it really, in some cases, sustainable? Every one of us most likely has our own opinion. But still, this race to be, let’s say, number one, this belief that if you are number one you will remain number one, and this momentum, I think, plus huge appetite, all this hype, definitely brings much, much more money to the table than we could ever imagine. And, you know, at the same time, it depends a lot, of course, on the algorithms, how efficient they will be. All of us most likely remember last year this DeepSeek moment, and there are also other models which are much more efficient. So, you know, at some point we might understand that it’s overestimated, overinvested.

At the same time, I remember one of Zuckerberg’s quotes that, you know, said, okay, in the worst-case scenario, I will have overcapacity for a couple more years and then I will use it.

Mr. Simonas Satunas

So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as the only one. Let’s explore a metaphor. Let’s imagine that we are in the 19th century and a prophet arrives and he tells us, okay, in five years, a new technology will emerge that will enable you to arrive from Delhi to Bangkok in less than an hour. But I don’t know what the technology is. Maybe it’s a ship, maybe it’s a car, maybe it’s a train, maybe it’s an airplane, but we must be prepared. So everyone is trying to be prepared and to build the right infrastructure. So let’s look at the structure. The problem is everyone thinks about it as something else.

So one will build an airport and the other one will build rails and the other one will build boats. I think that we are in this moment. We know that AGI will arrive. We know that it is soon and we know that we must be prepared. Compute is one of the elements that is necessary, but energy is also important, and heating and cooling are also important. Data is extremely important. Implementation is important. Language is important, in India as well. I think that one of the elements that we are not investing enough in is the human element. Think about critical thinking, for example. I don’t know when AGI will arrive, but I know that already now, for us, it is very important to raise critical thinking among the public.

When you hear something in the news, when you see something, was it made by AI? What is the manipulation that is being forced upon me? So I think that investing in education is not less critical than investing in computing.

Mr. Vinayak Godse

And then another element I want to come to you on, that you talked about: there is a very interesting discussion about this System 1 and System 2 thinking. Human System 1 is more intuitive in terms of response, and System 2 is more logical, and AI is probably helping with that, basically. But there is latency, and that’s an important area, and that’s why they are putting a lot of effort into improving the compute such that the latency of System 2 thinking is also lower, so that your intuitive thinking can improve with that, basically. But it’s not only the compute: the perception, the ambient, the senses, the emotions, all that also matters a lot, and that’s where the limitations of language-based models are getting exposed, basically. And you did talk about that in your initial remarks. Can you just throw light on that?

On the language? On the different type of the models right? Ambient, compute for that matter, world model that people talk about so…

Ms. Alexandra Bech Gjørv

Well, I just wanted to first agree with the previous speaker: you know, if you are a government, then democratic access to compute is a big topic. I think you can really get lost in just investing in compute power, so invest in skills, in leading-edge technology understanding in your own country, and in participating in the regulatory approach. Because one of the things that I care about is this: everybody says that there should be human oversight, but you know that once you get into these dilemma situations, like what should happen in a car accident, humans are not very good at understanding risks, and humans are not very good at really making ethical decisions. They tend to go as far as, you know, do your best and then let moral luck decide who gets lost. But in machine-driven systems you actually have to make decisions about those things. So I think educating also our politicians to know that you have to make the hard choices, because otherwise the machines will make them for you, and they will continue our biases, and, you know, it will not end well.

But then I just wanted to share a little story that I heard. You know, Michael Lewis, the guy with Moneyball and everything, he has this anecdote that in the basketball association in the States, they started video surveillance, and the coaches were all making racist decisions and home-team decisions. And by showing the videos and by showing the statistics, the next season they couldn’t find any bias at all. So I think that’s a good example of how the machines make people better, whereas we’re not able to better ourselves over time. I just thought this was a nice anecdote for this.

Mr. Vinayak Godse

Thank you. And I’ll come to Kenny. So, as we are trying to solve problems of security and privacy for the current big capability of AI, and we are struggling to understand what it means for security, what it means for privacy, suddenly there is a significant acceleration happening. So what are we doing right now for security and privacy that could help us graduate to when a more and more powerful model comes in, or anything else, basically? So can you just help us?

Mr. Kenny Kesar

Yeah, I think, on security: as we evolve, and we talked about compute, compute gets bigger, contexts get bigger, we get smarter in terms of what AI can do, and definitely the same AI that can generate can pose more sophisticated attacks. And when we get to AGI, right, the biggest thing is I could be emulating a human. Let’s say in a company, I could emulate a CEO and make a decision, because I’m getting so close to being natural. The threat is real. Now, even today, let’s say without AI, you need to be just a step ahead of the bad actors or the persons who are into cybercrime. You just have to be a step ahead. And similarly, we were mentioning the human portion, right?

That the human portion needs to get more educated, where there are going to be a set of humans that are going to use the same AI to build better agents to fight them. So now it’s a question of the tooling that you have at hand. Even today, it’s the tools: it’s a human who’s building tools to fight your cyber threats. Imagine, in the next era… The only thing is, it’ll become nearly close to science fiction when agents try locking humans out. But that’s, I would say, still science fiction. But the fact is, as we evolve, we need to right-size the solution, and that’s how we will manage compute too. You don’t use an i7 computer to do a simple calculator task of adding two numbers, right?

You use a calculator. So in the context of the world, we’re going to have SLMs, which are small language models, that will do smaller things so that we can manage compute. You have the bigger models that will solve the world’s hunger, in terms of what we do with different levels of machines and processing. I think there will be tiering. Right now, we were talking about it being a fight to be first. So with the fight to be first: bigger, better, more elaborate. But now as it evolves, you’ll get the right-sized fitting for each need. Only then will it be commercially viable. AI is not commercially viable today. The costs outweigh the ROI.

Mr. Vinayak Godse

Yeah, the current cost is quite significantly higher. You can do a POC, but once you put it into a production environment, the token cost is too high relative to the ROI. So, Simonas, I want to come to you: there is an established understanding of security, privacy, safety and ethics, right? And that’s the paradigm that we at least try to understand right now. But would AGI be an altogether different paradigm, and would the concepts of security and privacy be foundationally very different from what we discussed right now?

Mr. Simonas Satunas

So, as I see it, when we try to deal with the risks that AI poses, we distinguish between four different levels. The first level is the classical risks, like privacy, security, cyber fraud; for every technology that we have had since the 90s, we need to explain how it changes the current risks, and AI is much more powerful and poses a lot more risks, but these are the kinds of risks that, when we design products, we know how to deal with. Above it there is a level of human health and mental health, and we find that AI solutions can be quite problematic for mental health, can cause a lot of damage in some cases, and this is something that is not yet well understood and investigated. Above that there is a social level.

What does it do to the empathy between people? What does it do? Normally people say, oh, I see that it’s bad for my kids, they are experiencing bullying or addiction; usually what’s bad for your kids is also bad for you, and we understand that these are complications that we didn’t think about when we code. And the higher level is a macro level: what does it do to society? What does it do to democracy? I think that several countries are now experiencing foreign manipulation, and it is very easy to run campaigns that are built of fake news, and we see that manipulation can become very problematic. So I think that a national strategy and an international strategy should address all these levels, and all these levels have mitigations, but they are costly and they need collaboration.

So we need to be in close collaboration in order to mitigate these risks.

Mr. Vinayak Godse

It’s good, the way you put the structure, right? The things it would do to us, our brain, the things that will impact us individually. And we discussed that in one of the sessions that we hosted on neuroscience and AI: what this means for the brain development process if we are using AI for every small thing that we want to do, whether the brain development process plateaus for that matter, what it will mean for society, and then what kind of macro impact it will have. Do you want to add something on that?

Ms. Alexandra Bech Gjørv

Yeah, I just, sorry, I just want to build on that. It’s not just targeted manipulation, or the things that we see in our kids, with somebody walking around with a button called Friend and that’s the only friend that you need, but also, in the geopolitical context, the well-structured ability to create completely different information universes. You don’t need to be neurologically strange; you just see a completely different view. We just published a paper in Science on these agent swarms, and I’m just reading a book about the Ukraine and Russia war going on now, and how large populations are overpowered by totally different images of the world from ours. And at the very least, obviously, your defence systems need to be hardened against those kinds of manipulations, but it’s also, you know, actually an offensive strategy to find good bots that enter those universes.

It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think you’re very naive if you don’t start systematically working on how you make your conviction of what the world is like also part of the people that you need to somehow, hopefully not defeat, but relate to and convince that things can be better. So it’s not just a technological challenge. I would say it’s a huge mental leap for most of us.

Mr. Vinayak Godse

So, Simonas, the question is: the more we use AI, the more we become dependent on AI systems, right? And people's ability to think critically will go down, basically, while the speed increases, the dependence increases, and AI becomes more powerful. So, given what we see in terms of misinformation, disinformation, and deepfakes, there will probably be different kinds of cognitive warfare. How do you see such challenges? You talked about society and the individual; what kind of implications will this have for the individual, for society, and for the way the world is organized overall?

Simonas Cerniauskas

Yeah, absolutely. All those layers and all the dependencies, as you rightly stated. Critical thinking, of course, is one, but also awareness, education, and the skills and abilities for people to understand these things. For this audience, more or less everything here is self-evident, but when you start talking to people in the street, or from different backgrounds, you realize that what is self-evident to you might be completely different for another person. Finding ways to educate people, to help them identify the threats, is one of the key priorities, and also, I would say, an obligation on our side.

Mr. Vinayak Godse

One of the important challenges of critical thinking that I come across is this: critical thinking is nothing but your ability to give attention to various dimensions, nuances, perspectives, and views, right? It takes a tremendous amount of effort for me to become a critical thinker, and AI solves that quite easily for me. It can bring together all the attention, all the dimensions, all the nuances, all the viewpoints; I can quickly get access to them, right? So even for critical thinking, Kenny, the question for you is: we will end up depending too much on AI for that as well, right? So we need to know the distinction. Critical thinking is not just getting information or giving attention, so what is critical thinking?

So that is probably a very important question to ask.

Mr. Kenny Kesar

Critical thinking is very necessary for us to innovate further. The biggest issue the AI world is facing is that 30% of the content AI is consuming is already AI-generated. So basically you are feeding it back, and it's learning on the same model, when originally it was learning on artifacts that were built through different thinking processes. So I would say it's a boon, because it gets work done, but over time it's a risk that we will stop evolving. Because the brain is a muscle: if we don't exercise it and don't build those neurons which really drive critical thinking, it will actually be a very big loss to society.

So I would say general intelligence, everybody is asking for it. Now, how do we make sure that as AI and computers gain general intelligence, we are not losing our own intelligence to create that general intelligence again? It's a vicious cycle. It's a question we are debating and trying to answer ourselves; everybody has perspectives. It's something I think about. Do I have an answer to it? No. But I feel that critical thinking, on both sides, is something that we really need to critically think about.

Mr. Vinayak Godse

Yeah, so that's the challenge with anything you think of as a solution: what it means in this new paradigm is important. Now, for the concluding part of this discussion, a question to each of you, briefly. We have been doing security, privacy, and safety in a particular way, right? But as this paradigm is new, can we think about some anchor control right now that we should be mindful of? When AI was getting built, it was only after three years that we started talking about AI governance and all these things. So is there a way for us to think about some kind of anchor control, some idea, some concept that could help us navigate the challenges AGI could throw at us? I can start with you, and each of you can comment briefly.

Simonas Cerniauskas

Yeah, well, of course there are some technical things, like watermarks, labeling, and other technical features that could help us at least identify some threats. Then we can also talk about regulatory measures, but that's a broader topic for further discussion. Especially here in Europe we tend to regulate and over-regulate everything, but in a way I think at least some measures here can be really viable and really reasonable.

Mr. Simonas Satunas

Well, I come from a very small country. Israel is so small that it's like a pin on the map, and therefore our regulatory approach is that we are unable to determine the global regulation. In this AI race, I think global regulation is what matters most. Since we are a very tiny country, we must work with positive tools and say: okay, we cannot affect the regulation, but how can we work together with the AI developers to make the personality of the AI more moral, more ethical? How can we bring egalitarianism and equality into consideration? How can we avoid bias? I think that makes us work together with the industry and with academia to find out about new consequences.

I think that in many cases the giants, big tech, do not aim at unethical outcomes, but they work towards financial incentives that make AI behave in a very immoral way. If I take, for example, the conflict in Myanmar, in Burma: we saw that Meta was not actively promoting violence in Myanmar, but Meta's algorithm was designed to attract attention in a way that made the more violent posts much more viral and made violence flourish. So if we are able to promote a dialogue, and if we are able to work together with the industry in the development of new AI, sometimes we will be able to make AI more ethical.

Mr. Vinayak Godse

So, Alexandra, your view. One part is the anchor control, the idea or concept; the second part is how do you get in early, early in the game, right? When AI happened, it is only now, in 2025 and 2026, that we are discussing responsibility, alignment, adoption, and governance. So, in essence, the question is about anchor controls, and about ideas and ways for us to get into this discussion early.

Ms. Alexandra Bech Gjørv

Well, I think at least you need to work on resilience and robust rollback mechanisms. A little bit like what we're experiencing now in Europe, where we all have to practice living without electricity: you know it's a realistic option that somebody sabotages your electricity, and then you look at how dependent you really are and what the alternatives are. You plan from a point of view where you not only work to reduce risk but really work to reduce the consequences of those risks occurring. If you work on the traditional risk matrix, it's always about avoiding bad outcomes, but then also making the bad outcomes less bad. That's something that, at least in our view, the new realities are propelling, and I think that kind of thinking is important.

Mr. Vinayak Godse

Kenny, your view on this?

Mr. Kenny Kesar

Sure. Actually, the way we look at AI, from ethical AI to biases to data privacy, it's very similar to what a human would do even today. Today we have standard operating procedures that we review for biases, that we review for content; in our organizations we have functions that manage this. And we train people on ethical practices, on avoiding bias, and things like that. Ultimately, AI is very similar to that. In today's world, for lack of a better word, I call it an AOP instead of an SOP: an agent operating procedure, or AI operating procedure, where we have to train AI not to be biased.

So I feel there is a big industry in the offing that is going to manage and create models, LLMs, to validate that the responses from your common models are ethically right and non-biased. Because today, as organizations, we invite experts from outside to come and see our practices, whether we are ethical, whether we are transparent, and a number of those things. Very similarly, as we mature towards more general intelligence and new ways of working, I feel these control structures will come in cybersecurity and in the ethical, unbiased use of AI. Ultimately it will be a system of checks and balances, and we will see innovation in these areas.

That is how we see it. It's an evolving area; let's see how it develops.

Mr. Vinayak Godse

Thank you, all of you, for really helping us understand the meaning of this concept of AGI, how it will pan out from now, and what kind of challenges it will throw at us. There are definitely opportunities that we don't have time to discuss. But what could we start doing right now? This was definitely one of the important conversations, and I hope it helps you understand what we are talking about with AGI today. Join me in giving a big hand to my co-panelists for helping us understand. Thank you, Simonas. Thank you, Nir.

Thank you. We have a photo shoot; Alexandra, please come here for the photo shoot. I also request the fireside panelists, Hendrikus sir and Narendra sir, to please join us for the photo shoot. Before we commence the Fireside session, I would like to announce the launch of the AI Cyber Security Terminal, published today. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (33)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Kesar introduced the concept of accuracy progression through “five nines,” explaining that AI evolved from 90 % to 99 % accuracy over several years and each additional nine requires increasingly longer timeframes.”

The knowledge base explicitly describes Kesar’s “five-nines” accuracy benchmark and the increasing time required for each additional nine of accuracy [S1].

Additional Context (medium)

“Progress toward AGI depends on sustained investment, hardware breakthroughs, data‑privacy and regulatory challenges.”

Long-term sustained investment is highlighted as essential for fundamental research breakthroughs, providing context for the claim about investment dependence [S92].

Additional Context (medium)

“Industry chatter about a speculative bubble in compute spending, with concerns that over‑capacity may persist for years.”

An Alibaba Group chairman argued that AI investment is not a speculative bubble, offering a contrasting perspective that adds nuance to the bubble discussion [S99].

Additional Context (low)

“Mark Zuckerberg has suggested concerns about the compute spending cycle and over‑capacity.”

Zuckerberg publicly stated Meta’s long-term vision is to develop AGI and make it open source, confirming his active involvement in AGI discourse, though the source does not mention a bubble comment [S31].

Confirmed (medium)

“The moderator framed AI research as accelerating rapidly since around 2020 and intensifying after early‑2023 generative model releases.”

The knowledge base notes that artificial intelligence is advancing at a rapid pace, supporting the moderator’s framing of accelerated AI progress [S82].

External Sources (99)
S1
Artificial General Intelligence and the Future of Responsible Governance — – Mr. Kenny Kesar- Ms. Alexandra Bech Gjørv – Mr. Simonas Satunas- Ms. Alexandra Bech Gjørv – Ms. Alexandra Bech Gjørv…
S2
Artificial General Intelligence and the Future of Responsible Governance — -Mr. Vinayak Godse- Moderator/Host of the panel discussion on AGI (Artificial General Intelligence)
S3
Subrata K. Mitra Jivanta Schottli Markus Pauli — Gandhi was vehemently opposed to Partition, an outcome which other senior Congress leaders like Jawaharlal …
S4
Artificial General Intelligence and the Future of Responsible Governance — – Ms. Alexandra Bech Gjørv- Mr. Simonas Satunas – Simonas Cerniauskas- Mr. Simonas Satunas
S5
Artificial General Intelligence and the Future of Responsible Governance — – Mr. Kenny Kesar- Ms. Alexandra Bech Gjørv – Ms. Alexandra Bech Gjørv- Mr. Kenny Kesar
S6
Artificial General Intelligence and the Future of Responsible Governance — – Simonas Cerniauskas- Mr. Simonas Satunas- Mr. Kenny Kesar – Simonas Cerniauskas- Mr. Simonas Satunas- Ms. Alexandra B…
S7
National Disaster Management Authority — “One is the infrastructure layer”[9]. “Second is the operating system layer which runs on top of infrastructure”[62]. “f…
S8
https://dig.watch/event/india-ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think yo…
S9
Expert workshop on the right to privacy in the digital age — Ms Fanny Hidvégi, European policy manager at Access Now, Brussels, highlighted the actions taken by states. She started …
S10
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Such an assembly of varied views yields a well-rounded array of approaches, potentially leading to more nuanced and robu…
S11
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — AI changes the way in which knowledge is created, transmitted, and verified. Misinformation and disinformation will beco…
S12
Breaking the Fake in the AI World: Staying Smart in the Age of Misinformation, Disinformation, Hate, and Deepfake — ## Government Perspectives – **Carol Constantine** – Human resources technology company representative AHM Bazlur Rahm…
S13
Parallel Session A9: Climate Change Adaptation, Resilience-Building and DRR for Ports (continued) — In summary, the positive sentiment surrounding the shared experiences and strategies represents a constructive, forward-…
S14
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Nor do we hold identical views on democratic institutions. Thank you. We face a choice, either we step back or allow the…
S15
https://dig.watch/event/india-ai-impact-summit-2026/ai-safety-at-the-global-level-insights-from-digital-ministers-of — continue rapidly for policymakers across the globe to rely on an independent scientific assessment of what AI can do and…
S16
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Demis Hassabis on AGI Development:Demis Hassabis, CEO of Google DeepMind, predicts that Artificial General Intelligence …
S17
Indias Roadmap to an AGI-Enabled Future — Absolutely. In power sector, we use a lot of electronics. For example, I gave you a small example of IGBT. IGBT is again…
S18
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Professional experience analyzing various risks including cyber, environmental, and health risks, with observation that …
S19
Folding Science / DAVOS 2025 — Artificial General Intelligence (AGI) Development Hassabis believes that one or two major breakthroughs are still neede…
S20
Keynote-António Guterres — “Our target is 3 billion US dollars.”[29]”That is why, encouraged by the General Assembly of the United Nations, I am ca…
S21
Keynote-Sundar Pichai — Or in India, where a work -together is helping farmers. protect their livelihoods in the face of monsoons. Last summer, …
S22
Ethics and AI | Part 4 — Damage to information integrity (mis/disinformation, impersonation) Human rights violations Violation of intellectual …
S23
9821st meeting — The UK highlights the potential risks associated with AI, particularly in the areas of autonomous weapons and cyber atta…
S24
WS #123 Responsible AI in Security Governance Risks and Innovation — Cybersecurity | Network security Technical Challenges and Risks
S25
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — Suppose AI (as with previous technologies) frees educators from focusing solely on repetitive memorisation and routine p…
S26
Education meets AI — It was acknowledged that critical thinking enables individuals to analyse information critically, question assumptions, …
S27
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S28
Global AI Policy Framework: International Cooperation and Historical Perspectives — So global principles are very important, but implementation must account for national contexts and capacities, as you we…
S29
What is it about AI that we need to regulate? — A recurring theme was the need for shared principles rather than uniform solutions.Paula Gori articulated this approach:…
S30
Indias Roadmap to an AGI-Enabled Future — And at a certain volume of production that it has to be done. So, which means that resources have to be deployed in a ma…
S31
Meta joins the tech giants’ race for AGI — Meta, the parent company of Facebook, has entered the race for Artificial General Intelligence (AGI).Meta CEO Mark Zucke…
S32
Artificial General Intelligence and the Future of Responsible Governance — Massive compute investment is driven by the race to be first, though efficiency improvements may reduce requirements Sp…
S33
Presentation of outcomes to the plenary — This aligns with SDGs 13 and 14, which call for climate action and the conservation of marine life. Overall, the compreh…
S34
TECHNICAL SPECIFICATION — This Technical Specification examines electronic patient record systems at the clinical point of care that are also inte…
S35
EU Digital Diplomacy: Geopolitical shift from focus on values to economic security  — ‘Human‑centric’ language still appears, but under resilience. Explicit human rights advocacy, such as protections for di…
S36
Crypto hiring snaps back as AI cools — Tech firms led crypto’s hiring rebound, adding over 12,000 roles since late 2022, according toA16z’s State of Crypto 202…
S37
Wrap up — These key comments fundamentally reframed the discussion from typical technology policy debates to deeper philosophical …
S38
INCREASING ACCESS TO DATA ACROSS THE ECONOMY — Estimating the economic activity potentially in scope allows us to rank the levers according to their potential impact. …
S39
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — The analysis of the speeches reveals several significant findings. Firstly, it highlights that AI can eliminate unintent…
S40
Artificial intelligence (AI) – UN Security Council — In addition, there is a call forcontinuous education and awareness raisingabout AI’s capabilities and limitations. Educa…
S41
Emerging Shadows: Unmasking Cyber Threats of Generative AI — Furthermore, influence operations have been conducted to spread discord and misinformation. The rapid evolution of techn…
S42
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — AI changes the way in which knowledge is created, transmitted, and verified. Misinformation and disinformation will beco…
S43
AI: The Great Equaliser? — Another key point highlighted is the need for good governance to effectively manage the risks associated with AI. The ri…
S44
Folding Science / DAVOS 2025 — Mentions that AGI development may take a five-year timescale rather than the one or two years some are predicting. Time…
S45
Comprehensive Discussion Report: The Future of Artificial General Intelligence — The session examined critical questions surrounding the timeline for achieving Artificial General Intelligence (AGI) and…
S46
Practical Toolkits for AI Risk Mitigation for Businesses — In conclusion, the analysis recognizes the immense potential of AI technology but stresses the need to govern and regula…
S47
Education meets AI — In addition to the above topics, the significance of critical information and critical thinking in education was also di…
S48
Artificial intelligence (AI) and cyber diplomacy — The conversation expanded to highlight the universal need for digital literacy and capacity building in AI, urging gover…
S49
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Reality check for the artificial general intelligence (AGI) narrative:Since the launch of ChatGPT in November 2022, ther…
S50
Artificial General Intelligence and the Future of Responsible Governance — Mr. Kenny Kesar introduced the concept of accuracy progression through “five nines,” explaining that while AI evolved fr…
S51
The Dawn of Artificial General Intelligence? / DAVOS 2025 — In summary, the discussion emphasized the complex challenges and opportunities presented by AGI development, with no cle…
S52
Keynote-Jeet Adani — Adani announced that “earlier this week, the chairman of the Adani Group made one of the most transformative announcemen…
S53
Driving Indias AI Future Growth Innovation and Impact — Energy infrastructure investment critical for compute infrastructure development
S54
Ethics in the Age of AI — The ethical concerns raised by AI technology are diverse and far-reaching. The four main concerns discussed in the provi…
S55
Ethics and AI | Part 4 — Damage to information integrity (mis/disinformation, impersonation) Human rights violations Violation of intellectual …
S57
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — Suppose AI (as with previous technologies) frees educators from focusing solely on repetitive memorisation and routine p…
S58
WSIS Action Line C6: Digital Ecosystem Builders in action: Redefining the role of ICT regulators — This comment provides a crucial balance to the technology-focused discussion by emphasizing that human elements remain c…
S59
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S60
WS #98 Towards a global, risk-adaptive AI governance framework — During the Q&A session, the importance of standards in AI governance was discussed. Speakers highlighted the need for te…
S61
Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Explanatory Report) — 59.          The provision also provides for measures with regards to the identification of AI-generated content in orde…
S62
What is it about AI that we need to regulate? — A recurring theme was the need for shared principles rather than uniform solutions.Paula Gori articulated this approach:…
S63
Opening of the session — Canada: Thank you, Chair. We thank you for your efforts in seeking to devote tomorrow to the discussions that are necess…
S64
Opening of the session — – Ensuring the mechanism is action-oriented and needs-driven – Focusing on policy-oriented and cross-cutting thematic g…
S65
Opening remarks — Good morning, esteemed guests and participants. Today, we are gathered at the NET Mundial Plus 10 event to celebrate the…
S66
Any other business /Adoption of the report/ Closure of the session — The statement offers a sense of success and a forward-looking optimism, referencing a soon-to-occur resumed session. Thi…
S67
Opening of the session — Convergence necessary for progress with limited time. In summary, the analysis distils into a narrative that intertwine…
S68
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S69
Conversational AI in low income &amp; resource settings | IGF 2023 — Sameer Pujari:Thank you, Rajendra. And thanks for sitting on this forum. I think it’s a very interesting discussion, esp…
S70
Workshop 8: How AI impacts society and security: opportunities and vulnerabilities — Remote moderator: We actually have two questions online. The first one is from Antonina Cherevko. But security essential…
S71
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S72
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S73
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/3/OEWG 2025 — North Macedonia: Distinguished Chair, esteemed delegates, North Macedonia aligns itself with the statement of European…
S74
Agenda item 5: Day 1 Afternoon session — A victim-focused framework will highlight the humanitarian impact of cyberattacks, fostering a more empathetic and compr…
S75
Agenda item 5: Day 2 Morning session — Belarus pledged steadfast backing for the Group’s initiatives and lauded the leadership’s competency in guiding the Grou…
S76
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 5 — Pakistan: Thank you, Chair. Let me take this opportunity to commend the work done by you and your team in confidence-…
S77
Wrap up — High level of consensus on core principles with nuanced understanding of implementation challenges. The agreement spans …
S78
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S79
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — High level of consensus with constructive engagement. While there were some specific reservations raised (particularly a…
S80
How to Project Europe’s Power / Davos 2025 — The tone was largely pragmatic and solution-oriented, with speakers acknowledging challenges but focusing on concrete st…
S81
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S82
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S83
Keynote-Rishad Premji — “And they are the pioneers and the thought leaders of artificial intelligence.”[13] Artificial intelligence Opening fr…
S84
Opening of the session/OEWG 2025 — El Salvador: Thank you, Chairman. In line with the opening words, El Salvador hopes to provide comments to all the dif…
S85
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Pratek Sibal:Thanks Ian. How much time do I have? You have five to six minutes, but there’s no rush. I wanna hear your c…
S86
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S87
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S88
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S89
Election integrity in the digital age: insights from IGF 2024 — Election integrity and disinformation have been closely followed topics during the session ‘Internet governance and elec…
S90
Democratizing AI Building Trustworthy Systems for Everyone — And so there are different in quotes, markets here at UL. People who can pay at different levels. Even within a country …
S91
ETHIO PA 2025 — How likely is it that jobs will be lost to automation in the manufacturing sector in the Fourth Industri…
S92
Science as a Growth Engine: Navigating the Funding and Translation Challenge — Long-term sustained investment is essential for fundamental research breakthroughs
S93
HIGH LEVEL LEADERS SESSION I — Microsoft’s deep investment in this area demonstrates the company’s commitment to harnessing the power of data for posit…
S94
Main Topic 2 –  European approach on data governance  — Emphasising data’s critical role as the lifeblood of the digital economy, the speaker cautioned about the risks associat…
S95
Main Session on Sustainability &amp; Environment | IGF 2023 — Maike Lukien:So policymakers, same as us, can never have too much information to base evidence-based decisions on. The o…
S96
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — This comment transformed the tone of the entire discussion, legitimizing disagreement and uncertainty as valuable rather…
S97
Workshop 3: Quantum Computing: Global Challenges and Security Opportunities — Mattingley-Scott stresses the urgency of taking action immediately, even though the exact timeline for when quantum thre…
S98
HUMANITARIAN NEGOTIATION — Some societies tolerate higher levels of ambiguity and uncertainty than others. In negotiations, this means that, while …
S99
AI investment shows strong momentum beyond bubble fears — AI investmentis not showingsigns of a speculative bubble, according to theAlibaba Groupchairman. Instead, he argued at t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ms. Alexandra Bech Gjørv
5 arguments · 148 words per minute · 942 words · 380 seconds
Argument 1
Need for massive, low‑latency, energy‑efficient hardware (neuromorphic, edge) to achieve human‑like situational awareness
EXPLANATION
She argues that achieving AGI requires specialized hardware that can process information with very low latency and high energy efficiency, such as neuromorphic and edge computing architectures, to match human reflexes and situational awareness.
EVIDENCE
She describes that many operations demand millisecond-level response, noting machines can already detect fire quickly but still lack the ability to interpret context, emotions, and body language, which requires low-latency, energy-efficient hardware like neuromorphic and edge computing [26-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Technical requirements for low-latency, energy-efficient hardware are highlighted in [S1]; infrastructure-layer considerations for such hardware are discussed in [S7].
MAJOR DISCUSSION POINT
Hardware requirements for AGI
DISAGREED WITH
Mr. Kenny Kesar, Mr. Simonas Satunas
Argument 2
Access to personal data needed for true situational awareness creates privacy limits
EXPLANATION
She points out that to give AI human‑like situational awareness, massive amounts of personal and private data must be collected, which raises significant privacy concerns and limits.
EVIDENCE
She states that achieving situational awareness requires studying a lot of data that would be considered private, personal, and that this creates real limits on privacy [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between extensive data collection for AGI and privacy constraints is described in [S1]; concrete legislative examples concerning privacy and backdoors are provided in [S9].
MAJOR DISCUSSION POINT
Privacy constraints on data collection for AI
AGREED WITH
Mr. Simonas Satunas
Argument 3
Emphasis on robust rollback mechanisms and system resilience to mitigate failures
EXPLANATION
She emphasizes the need for resilience strategies such as rollback mechanisms and risk‑matrix planning to reduce the impact of AI failures, drawing an analogy to living without electricity as a test of system robustness.
EVIDENCE
She suggests working on resilience and robust rollback mechanisms, likening it to practicing living without electricity to understand dependence and planning for alternative solutions, thereby reducing the severity of bad outcomes [187-189].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Resilience and rollback mechanisms are emphasized in [S1]; parallels to broader infrastructure resilience are drawn in [S13]; a holistic approach to supply-chain and system resilience is outlined in [S10].
MAJOR DISCUSSION POINT
Resilience and rollback in AI governance
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar
DISAGREED WITH
Mr. Simonas Cerniauskas, Mr. Kenny Kesar, Mr. Simonas Satunas
Argument 4
AI‑driven misinformation, cognitive warfare, and creation of divergent information universes threaten societal cohesion
EXPLANATION
She warns that AI can be used to generate large‑scale misinformation campaigns that create separate reality bubbles, which can be weaponised geopolitically and affect public perception.
EVIDENCE
She references a paper on agent swarms that shows how AI can create completely different information universes, citing the Ukraine-Russia war as an example of populations being overpowered by divergent narratives, and notes the need for defensive measures against such manipulation [144-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled misinformation and the creation of separate information universes are discussed in [S11]; agent-swarm information warfare is detailed in [S12]; these concerns are also referenced in [S1].
MAJOR DISCUSSION POINT
Misinformation and information warfare via AI
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar
Argument 5
Building resilience through risk‑matrix planning, rollback strategies, and reducing consequence severity is a proactive governance approach
EXPLANATION
She reiterates that proactive governance should focus on minimizing the impact of risks by planning for contingencies, using risk matrices, and ensuring that any adverse outcomes are less severe.
EVIDENCE
She repeats the importance of a risk-matrix approach that not only avoids bad outcomes but also makes the consequences of any failures less severe, describing this as a new reality-driven way of thinking [187-189].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The risk-matrix planning approach is presented in [S1]; systemic resilience strategies are explored in [S10]; practical resilience planning examples are given in [S13].
MAJOR DISCUSSION POINT
Proactive risk management for AI systems
Mr. Vinayak Godse
1 argument, 104 words per minute, 1988 words, 1138 seconds
Argument 1
Uncertainty and need for societal preparedness
EXPLANATION
He stresses that while AI advancements have accelerated, there remains uncertainty about when AGI will arrive, and societies must prepare now to avoid missing the opportunity to govern it effectively.
EVIDENCE
He notes the rapid AI developments since 2020, the growing discussion around AGI, and warns that failing to pay attention now could cause us to miss the chance to discuss, govern, and manage AGI over the next 2-10 years [1-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for proactive, science-based assessment and governance of emerging AI capabilities is made in [S15]; broader governance imperatives are echoed in [S14].
MAJOR DISCUSSION POINT
Preparedness for AGI
DISAGREED WITH
Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Mr. Simonas Satunas
7 arguments, 161 words per minute, 1149 words, 426 seconds
Argument 1
Simple functional definition: AI that can perform any human task at professional level
EXPLANATION
He defines AGI as an AI system capable of executing every human task with the accuracy and professionalism of a human expert.
EVIDENCE
He states that AGI would be able to perform every human task at a professional level, acknowledging that the definition is not optimal but serves as a digestible baseline [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A concise functional definition of AGI matching this description is provided in [S1].
MAJOR DISCUSSION POINT
Definition of AGI
Argument 2
Timeline estimate: AGI could emerge within 3–7 years
EXPLANATION
He predicts that AGI may be achieved in a timeframe of three to seven years based on current trends and public perception of generative AI.
EVIDENCE
He mentions that many Israelis now trust generative AI more than friends, indicating a shift toward AGI, and estimates a 3-7 year horizon for reaching the milestone [21].
MAJOR DISCUSSION POINT
Projected timeline for AGI
DISAGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Argument 3
Compute is one element among many (data, energy, human skills) in the AGI supply chain
EXPLANATION
He argues that while compute is essential, other factors such as data, energy, and especially human critical‑thinking skills are equally important for achieving AGI.
EVIDENCE
He uses a 19th-century metaphor about preparing infrastructure for an unknown technology, then lists compute, energy, data, implementation, language, and the under-invested human element like critical thinking as crucial components [72-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A holistic view of the AGI supply chain that includes compute, data, energy, and human critical-thinking skills is outlined in [S1].
MAJOR DISCUSSION POINT
Holistic view of AGI requirements
AGREED WITH
Mr. Kenny Kesar
DISAGREED WITH
Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
Argument 4
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed
EXPLANATION
He categorises AI risks into four layers: traditional security and privacy concerns, mental‑health impacts, social‑level effects on empathy and bullying, and macro‑level threats to democracy and manipulation.
EVIDENCE
He outlines four risk levels: classical (privacy, cyber-fraud), mental health, social (empathy, bullying), and macro (democracy, foreign manipulation), and calls for national and international strategies to mitigate them [131-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A four-layer risk taxonomy covering privacy, mental-health, social, and macro-societal threats is described in [S1] and reinforced by the risk-level discussion in [S18].
MAJOR DISCUSSION POINT
Multi‑layered AI risk taxonomy
AGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Argument 5
High public trust in generative AI may erode critical thinking and mental health
EXPLANATION
He observes that a large proportion of people trust generative AI more than human peers, which could diminish critical thinking abilities and affect mental well‑being.
EVIDENCE
He cites that 50 % of Israelis trust generative AI tools more than their friends, suggesting a shift that brings society closer to AGI but may also reduce critical thinking [21].
MAJOR DISCUSSION POINT
Impact of AI trust on cognition
Argument 6
Small nations should pursue global regulation and collaborate with industry to embed ethics, equality, and bias mitigation
EXPLANATION
He argues that tiny countries like Israel cannot dictate global AI rules alone, so they must work with industry and academia to promote ethical, egalitarian AI development and avoid bias.
EVIDENCE
He explains Israel’s limited regulatory power, the need for global regulation, and the importance of collaborating with AI developers to embed morality, equality, and bias mitigation, giving the Myanmar example where Meta’s algorithm amplified violent content [174-180].
MAJOR DISCUSSION POINT
Role of small states in AI governance
Argument 7
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
EXPLANATION
He stresses that educating the public and raising awareness are vital for people to identify AI‑driven threats and develop critical‑thinking capabilities.
EVIDENCE
He notes the need to educate people to identify threats, emphasizing that what is self-obvious to one may be unknown to another, and calls for education as a key priority [154-155].
MAJOR DISCUSSION POINT
Importance of AI literacy
AGREED WITH
Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Mr. Kenny Kesar
4 arguments, 156 words per minute, 1299 words, 497 seconds
Argument 1
Accuracy progression (from 90 % to five-nines levels) drives compute growth and moves toward AGI
EXPLANATION
He explains that improving AI accuracy from 90 % to "five-nines" levels (99.999 %) requires incremental compute investments, and each additional "nine" adds years of development, bringing AI closer to human‑like intelligence.
EVIDENCE
He describes the five-nines accuracy goal, noting that moving from 90 % to 99 % took five to ten years, and each additional nine adds another one to two years, linking higher accuracy to progress toward AGI [44-48].
MAJOR DISCUSSION POINT
Accuracy as a driver for compute and AGI
AGREED WITH
Mr. Simonas Satunas
DISAGREED WITH
Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
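The accuracy ladder behind this argument can be made concrete with simple arithmetic (an illustrative sketch, not taken from the session): each additional "nine" of accuracy divides the residual error rate by ten, so "five nines" means roughly one error per 100,000 operations.

```python
# Illustrative arithmetic for the "five-nines" accuracy ladder:
# each additional nine divides the residual error rate by ten.
def error_rate(nines: int) -> float:
    """Residual error rate at an accuracy of `nines` nines (1 nine = 90 %)."""
    return 10.0 ** -nines

for n in range(1, 6):
    accuracy = 1.0 - error_rate(n)
    print(f"{n} nine(s): accuracy {accuracy:.5%}, "
          f"about one error per {10 ** n:,} operations")
```

The exponential shape of this ladder is why, as Kesar notes, every extra nine demands disproportionately more compute and development time even though the headline accuracy figure barely moves.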
Argument 2
AI can generate sophisticated attacks and impersonate humans, raising new security threats
EXPLANATION
He warns that as AI becomes more capable, it can be used to launch advanced cyber‑attacks and mimic human decision‑makers, creating serious security challenges.
EVIDENCE
He states that AI capable of generating content can also produce sophisticated attacks and emulate a CEO’s decisions, highlighting the real threat of AI-driven impersonation [105-108].
MAJOR DISCUSSION POINT
Emerging AI‑enabled security threats
Argument 3
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
EXPLANATION
He argues that critical thinking is necessary to avoid a feedback loop where AI‑generated content dominates, which could stall human cognitive development.
EVIDENCE
He notes that 30 % of content is already AI-generated, creating a risk of a vicious cycle that hampers human critical thinking and innovation, and calls for education to maintain human intelligence alongside AI [164-170].
MAJOR DISCUSSION POINT
Critical thinking as a safeguard against AI over‑reliance
AGREED WITH
Mr. Simonas Satunas, Mr. Simonas Cerniauskas
Argument 4
Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training, will become standard practice
EXPLANATION
He proposes that organisations will adopt AI‑specific operating procedures—AOPs—to systematically audit bias, ensure ethical use, and manage AI lifecycle similarly to traditional SOPs.
EVIDENCE
He describes current SOP-like reviews for bias and content, the training of staff on ethical practices, and envisions future AOPs that validate AI responses for ethics and bias, predicting an emerging industry around such controls [191-197].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The move toward standardized AI governance frameworks, including bias audits and ethical training, aligns with emerging ethical AI guidelines discussed in [S14].
MAJOR DISCUSSION POINT
Institutionalizing AI governance through AOPs
AGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
DISAGREED WITH
Mr. Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Simonas Cerniauskas
4 arguments, 132 words per minute, 632 words, 286 seconds
Argument 1
Broad definition: smarter AI that reasons, learns, adapts and transfers knowledge, rather than narrow task-specific AI
EXPLANATION
He outlines a common agreement that AGI must be a more general form of AI capable of reasoning, learning, adapting, and transferring knowledge across domains, unlike today’s narrow AI applications.
EVIDENCE
He lists the attributes (reasoning, learning, adaptation, knowledge transfer, and breadth beyond narrow domains) as core aspects of AGI [12-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Core AGI characteristics-reasoning, learning, adaptation, and knowledge transfer-are enumerated in [S1].
MAJOR DISCUSSION POINT
Core characteristics of AGI
Argument 2
Current AI investment surge may be unsustainable; risk of a bubble
EXPLANATION
He observes that massive funding into AI may be unsustainable, questioning whether the hype will lead to a bubble or over‑investment.
EVIDENCE
He remarks that we are in a super-high investment cycle and that many wonder whether it is a bubble, noting that past over-capacity (citing Zuckerberg) may fuel over-investment concerns [70-71].
MAJOR DISCUSSION POINT
Potential AI investment bubble
Argument 3
Early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks are needed to guide AI development
EXPLANATION
He suggests that initial control mechanisms—like labeling AI outputs and establishing regulatory measures—are essential to identify threats and steer AI development responsibly.
EVIDENCE
He mentions technical tools such as labeling and other safeguards, and notes that European regulatory approaches could serve as viable examples for early controls [173-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early control mechanisms like labeling and regulatory safeguards are advocated in [S1]; European regulatory approaches exemplify such early controls in [S11].
MAJOR DISCUSSION POINT
Pre‑emptive AI governance tools
DISAGREED WITH
Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar, Mr. Simonas Satunas
Argument 4
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
EXPLANATION
He emphasizes that educating the public and raising awareness are crucial for people to detect AI‑generated threats and develop critical‑thinking abilities.
EVIDENCE
He stresses the need to educate people to identify threats, pointing out that what is self-obvious to one may be unknown to another, and calls for education as a priority [154-155].
MAJOR DISCUSSION POINT
AI literacy as a defensive measure
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar
Agreements
Agreement Points
Education, awareness and critical‑thinking skills are essential to recognise and counter AI‑induced threats
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats
All three panelists stress that building AI literacy, raising public awareness and fostering critical-thinking are prerequisite measures to identify and mitigate AI-driven threats, from misinformation to security risks [154-155][164-170][154-155].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Security Council emphasizes continuous AI education and awareness-raising to empower stakeholders [S40]; IGF-related discussions highlight the need for critical-thinking curricula in schools [S47]; and cyber-diplomacy forums call for digital-literacy programmes to build societal resilience to AI threats [S48].
Structured risk management, resilience and rollback mechanisms are needed to mitigate AI‑related harms
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas, Mr. Kenny Kesar
Emphasis on robust rollback mechanisms and system resilience to mitigate failures; Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed; Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training, will become standard practice
The speakers converge on the need for formal risk-management frameworks – from resilience and rollback planning (Alexandra) to a layered risk taxonomy (Satunas) and institutionalised AI Operating Procedures (Kesar) – to keep AI systems safe and accountable [187-189][131-138][191-197].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on AI governance stress the importance of resilience and rollback capabilities as part of risk-management frameworks [S33]; AI governance reports call for robust structures to address misinformation and surveillance risks [S43]; and practical toolkits for businesses recommend regulatory safeguards and contingency plans [S46].
Privacy constraints limit the data needed for true situational awareness in AGI
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Access to personal data needed for true situational awareness creates privacy limits; Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed
Both panelists highlight privacy as a fundamental barrier to collecting the personal data required for human-like situational awareness, framing it as a classic AI risk that must be managed [35-37][131].
POLICY CONTEXT (KNOWLEDGE BASE)
Technical specifications for health data underline privacy-by-design limits on data sharing that affect situational awareness [S34]; broader analyses of data-access levers note that privacy regulations can restrict the flow of high-quality data needed for AGI development [S38]; EU digital diplomacy documents also discuss the tension between privacy and security objectives [S35].
Compute power is crucial for AI progress but must be complemented by data, energy and human skills
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Compute is one element among many (data, energy, human skills) in the AGI supply chain; Accuracy progression (from 90 % to five-nines levels) drives compute growth and moves toward AGI
Both agree that while increasing compute is a driver of higher AI accuracy and a step toward AGI, it is only one piece of a broader ecosystem that includes data, energy and human expertise [72-90][44-48].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts in responsible AI governance argue that while massive compute drives progress, a holistic mix of energy efficiency, data quality and skilled personnel is essential [S32]; India’s AGI roadmap stresses coordinated resource deployment beyond raw compute [S30]; and economic analyses of data levers highlight the complementary role of data and energy resources [S38].
AI‑generated misinformation and manipulation pose serious societal risks
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas, Mr. Kenny Kesar
AI‑driven misinformation, cognitive warfare, and creation of divergent information universes threaten societal cohesion; Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed; 30 % of the content is already AI‑generated … risk of a vicious cycle that hampers human critical thinking
All three point to the danger that generative AI can flood the information ecosystem with false or biased content, eroding trust, mental health and democratic processes [144-149][131-138][165-170].
POLICY CONTEXT (KNOWLEDGE BASE)
Security analyses identify generative AI as a vector for influence operations and misinformation campaigns [S41]; policy discussions on AI’s impact on knowledge ecosystems flag disinformation as a primary societal challenge [S42]; and governance frameworks call for measures to curb AI-driven misinformation and protect vulnerable groups [S43].
Similar Viewpoints
Both the moderator and the panelist call for the introduction of early‑stage governance tools – like output labeling and regulatory safeguards – to steer AI development before AGI arrives [172][173-176].
Speakers: Mr. Vinayak Godse, Mr. Simonas Cerniauskas
Early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks are needed to guide AI development
Unexpected Consensus
AI can be leveraged to improve human decision‑making and reduce bias
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
AI‑driven misinformation, cognitive warfare, and creation of divergent information universes threaten societal cohesion; Small nations should pursue global regulation and collaborate with industry to embed ethics, equality, and bias mitigation
While Alexandra focuses on hardware and privacy, she also shares an anecdote showing AI reducing human bias in sports officiating; Simonas Satunas argues that collaboration with industry can embed ethics and curb bias. The convergence on AI as a tool for bias reduction is not obvious given their differing primary concerns [99-101][174-180].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on gender-inclusive AI demonstrate that algorithmic systems can mitigate unconscious human bias and promote fairer outcomes [S39]; governance literature also notes AI’s potential to support unbiased decision-making when coupled with proper oversight [S43].
Overall Assessment

The panel shows strong convergence on four pillars: (1) education and critical‑thinking as a defence against AI misuse; (2) comprehensive risk‑management frameworks including resilience, rollback and procedural safeguards; (3) recognition of privacy as a limiting factor for data‑intensive AGI; (4) acknowledgement that compute is essential but must be balanced with data, energy and human expertise. There is also broad agreement that AI‑generated misinformation threatens societal cohesion.

Consensus is high on governance, risk management and capacity‑building measures, and moderate on technical pathways (compute, hardware). This suggests that future policy discussions can build on a shared foundation of education, risk controls and privacy safeguards while still debating timelines and specific technical solutions.

Differences
Different Viewpoints
Timeline for achieving AGI
Speakers: Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Timeline estimate: AGI could emerge within 3–7 years; “I’m not necessarily subscribing to the time frame. I think that depends on how much money we throw at it.”; Uncertainty and need for societal preparedness
Satunas predicts a concrete 3-7-year horizon for AGI based on current public trust in generative AI [21]. Alexandra rejects a fixed timeline, arguing that progress depends on funding and other factors [23-25]. Godse stresses that the exact arrival date is unknown and urges societies to prepare now to avoid missing governance opportunities [1-7].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent Davos discussions suggest a five-year horizon for AGI rather than the shorter timelines some predict [S44]; a comprehensive report on AGI futures also documents divergent views on expected timelines and their societal implications [S45].
What factor is the primary driver for reaching AGI – compute accuracy, specialised hardware, or a broader mix of resources
Speakers: Mr. Kenny Kesar, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Accuracy progression (from 90 % to five-nines levels) drives compute growth and moves toward AGI; Compute is one element among many (data, energy, human skills) in the AGI supply chain; Need for massive, low‑latency, energy‑efficient hardware (neuromorphic, edge) to achieve human‑like situational awareness
Kesar links higher accuracy (five-nine) directly to increased compute and treats this as the main path toward AGI [44-48]. Satunas argues that compute is only one piece of a larger puzzle that also includes data, energy and critical-thinking skills [72-90]. Alexandra focuses on the necessity of specialised low-latency, energy-efficient hardware to replicate human reflexes and situational awareness [26-33]. The three speakers therefore disagree on which element should be prioritised.
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible-governance panels highlight that compute alone is insufficient and that energy, hardware efficiency, data and human factors jointly drive AGI progress [S32]; India’s roadmap similarly stresses a balanced resource mix for goal-directed research [S30].
Preferred early‑stage governance mechanisms (anchor controls, resilience/rollback, AI‑operating‑procedures, education/global regulation)
Speakers: Mr. Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar, Mr. Simonas Satunas
Early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks are needed to guide AI development; Emphasis on robust rollback mechanisms and system resilience to mitigate failures; Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training, will become standard practice; Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats; small nations should pursue global regulation and collaborate with industry
Cerniauskas proposes technical anchor controls like labeling and early regulation as the first line of defence [173-176]. Alexandra stresses building system resilience through rollback plans and risk-matrix thinking [187-189]. Kenny envisions institutionalised AI Operating Procedures (AOP) that embed bias checks and ethical training as the core governance tool [191-197]. Satunas highlights education, public awareness and the need for global regulatory cooperation, especially for small states, as the key response [174-180]. These differing prescriptions reveal a lack of consensus on the most effective early-stage control strategy.
POLICY CONTEXT (KNOWLEDGE BASE)
Toolkits for AI risk mitigation propose anchor controls, operational procedures and global regulatory coordination as early-stage safeguards [S46]; governance reports also prioritize resilience and rollback mechanisms alongside education initiatives [S33]; EU digital diplomacy notes the shift toward procedural and ethical controls in AI strategy [S35].
Unexpected Differences
Compute as the central lever versus a broader resource mix
Speakers: Mr. Kenny Kesar, Mr. Simonas Satunas
Accuracy progression (from 90 % to five-nines levels) drives compute growth and moves toward AGI; Compute is one element among many (data, energy, human skills) in the AGI supply chain
Kesar treats compute (and the associated accuracy gains) as the primary engine propelling AI toward AGI, whereas Satunas explicitly downplays compute’s primacy, insisting that data, energy and especially human critical-thinking are equally indispensable. This divergence is surprising given their shared technical background [44-48][72-90].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in AI governance circles stress that focusing solely on compute overlooks critical dependencies on data, energy and talent, advocating for a broader resource portfolio [S32]; policy analyses from India echo this balanced perspective [S30].
Hardware‑centric resilience versus procedural/ethical controls
Speakers: Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Emphasis on robust rollback mechanisms and system resilience to mitigate failures; Development of AI Operating Procedures (AOP) analogous to SOPs, including bias audits and ethical training
Alexandra focuses on physical and systemic resilience (rollback, risk‑matrix) as the main safeguard, while Kenny proposes a procedural, standards‑based approach (AOP) centred on bias and ethics audits. The contrast between hardware‑focused risk mitigation and process‑focused governance was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Resilience-focused policy briefs emphasize hardware robustness as a pillar of AI safety [S33]; however, practical governance toolkits and EU strategies highlight procedural and ethical safeguards as complementary or alternative approaches [S46][S35].
Overall Assessment

The panel shows substantial divergence on three core fronts: (1) the expected timeline for AGI, with one speaker offering a short‑term estimate and others rejecting a fixed horizon; (2) the relative importance of compute versus hardware versus a holistic resource mix; (3) the optimal early‑stage governance toolkit, ranging from technical anchor controls to resilience planning, procedural AOPs, and education‑driven regulation. While there is consensus on the need for education, critical thinking and multi‑layered risk awareness, the lack of alignment on strategic priorities could hinder coordinated policy responses and investment decisions.

The level of divergence is high: the disagreements touch on fundamental strategic choices (timing, resource allocation, governance architecture) that shape national and international AI policy. Without a shared roadmap, stakeholders may pursue conflicting initiatives, leading to fragmented regulation, duplicated investments, and potential gaps in security and ethical safeguards.

Partial Agreements
All three speakers agree that building AI literacy and critical‑thinking capacity is crucial to mitigate AI‑driven risks, even though they frame it differently (Satunas focuses on public education, Kenny on preventing a feedback loop of AI‑generated content, Cerniauskas on broader awareness) [154-155][164-170].
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Simonas Cerniauskas
Education, awareness, and critical‑thinking skills are essential to recognise and counter AI‑induced threats; “critical thinking that is very necessary for us to innovate further”
Both acknowledge that AI introduces new security threats beyond traditional privacy and cyber‑fraud concerns, though Satunas frames them within a layered risk taxonomy while Kenny highlights specific attack vectors such as CEO impersonation [131-138][105-108].
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Classical risks (privacy, cyber‑fraud) plus higher‑level risks to mental health, social cohesion, and democracy must be addressed; AI can generate sophisticated attacks and impersonate humans, raising new security threats
Takeaways
Key takeaways
AGI is broadly defined as AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow tasks; a functional view describes it as performing any human professional task at comparable accuracy.
Panelists estimate AGI could appear within roughly 3–7 years, though there is considerable uncertainty and a need for societal preparedness.
Achieving AGI will require massive, low‑latency, energy‑efficient compute hardware (neuromorphic, edge) together with data, energy, and human expertise; compute is only one element of a larger supply chain.
Current investment in AI compute is enormous and may prove excessive, raising concerns about a potential bubble.
Security and privacy risks will intensify: AI can generate sophisticated attacks, impersonate humans, and exploit the personal data needed for true situational awareness.
Beyond classical risks, AI poses higher‑level threats to mental health, social cohesion, and democratic processes through misinformation and cognitive warfare.
Public trust in generative AI is high, which can erode critical‑thinking skills; education, awareness, and critical‑thinking training are essential safeguards.
Governance will need early “anchor controls” such as labeling, technical safeguards, and regulatory frameworks; AI Operating Procedures (AOP) analogous to SOPs are envisioned.
Collaboration across nations, industry, and academia is crucial to embed ethics, equality, and bias mitigation, especially for smaller countries lacking global regulatory influence.
Resilience measures (robust rollback mechanisms, risk‑matrix planning, and tiered deployment of small and large models) are recommended to limit the impact of failures.
Resolutions and action items
Develop and adopt early anchor controls (e.g., model labeling, technical safeguards) as part of AI governance.
Invest in education and critical‑thinking programs to prepare the public for AI‑driven information environments.
Encourage collaboration between governments, industry, and academia to shape global regulation and embed ethical principles in AI development.
Create AI Operating Procedures (AOP) for bias audits, ethical training, and continuous monitoring of AI systems.
Implement resilience strategies such as rollback mechanisms and tiered deployment of small and large language models to manage compute costs and risk.
Unresolved issues
Exact timeline for AGI emergence remains uncertain; no consensus on when it will be realized.
How to balance massive compute investment with efficiency and sustainability without creating a bubble.
Specific technical pathways to achieve human‑level situational awareness (e.g., multimodal embodied learning) are still open questions.
Concrete regulatory frameworks and international agreements for AGI governance have not been defined.
Methods to protect privacy while providing the data needed for advanced AI reasoning are not yet resolved.
Strategies to prevent erosion of critical thinking and mitigate cognitive warfare lack detailed implementation plans.
Suggested compromises
Adopt a tiered model approach: use small, task-specific language models for low-risk functions while reserving large models for high-value, high-risk applications.
Combine probabilistic AI methods with deterministic controls to improve reliability and move toward AGI without sacrificing safety.
Balance heavy compute investment with research into more efficient algorithms and hardware to avoid over-investment.
Blend regulatory oversight with industry self-governance (e.g., AOPs, bias audits) to create flexible yet accountable AI ecosystems.
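The tiered-model compromise can be pictured as a simple routing layer that applies deterministic controls on top of probabilistic models. The sketch below is purely hypothetical and not from the session: the model names, cost figures, and the `route` function are invented for illustration.

```python
# Hypothetical sketch: route low-risk requests to a small, cheap model and
# reserve the large model for high-risk, high-value work. All names and
# cost figures are illustrative, not real systems.
from dataclasses import dataclass


@dataclass
class ModelTier:
    name: str
    relative_cost: float  # illustrative cost units per request


SMALL = ModelTier("small-task-model", relative_cost=1.0)
LARGE = ModelTier("large-frontier-model", relative_cost=50.0)


def route(request_risk: str) -> ModelTier:
    """A deterministic control deciding which probabilistic model handles a request."""
    return LARGE if request_risk == "high" else SMALL


# Low-risk traffic stays on the cheap tier; only high-risk work pays for the large model.
assert route("low") is SMALL
assert route("high") is LARGE
```

The point of the pattern is that the routing rule itself is deterministic and auditable, even though the models behind it are not.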
Thought Provoking Comments
AGI will be something that can perform every human task at the level of accuracy and professionalism of a human professional. Already, 50% of Israelis trust generative AI tools more than they trust their friends, which brings us closer to AGI.
Provides a concrete, human‑centric definition of AGI and backs it with a sociological metric (trust) that signals a shift in public perception, turning the debate from abstract timelines to observable behavior.
Shifted the conversation from speculative timelines to measurable societal adoption. Prompted other panelists to discuss trust, adoption curves, and the gap between current AI capabilities and true AGI.
Speaker: Simonas Satunas
Machines can already make millisecond-level decisions (e.g., fire detection), but interpreting context, emotions, ambiguity, and body language remains far away. Achieving human-like situational awareness will require low-latency, energy-efficient neuromorphic and edge hardware, plus massive amounts of private data, which runs up against privacy limits.
Links technical hardware challenges directly to the core AI limitation of contextual understanding, while foregrounding privacy as a fundamental barrier, thus expanding the discussion beyond pure algorithmic progress.
Introduced a new dimension—hardware and privacy constraints—causing the panel to explore compute needs, data governance, and the trade‑off between performance and personal data protection.
Speaker: Alexandra Bech Gjørv
The epitome of accuracy is five nines (99.999%). Moving from 90% to 99% accuracy took 5–10 years; each additional nine adds another 1–2 years. True AGI will require AI that can not only learn from data but also invent new ideas, similar to the human brain.
Offers a quantitative framework (five‑nines) to gauge progress and reframes AGI as a transition from regression‑based learning to genuine invention, adding a measurable benchmark to an otherwise vague concept.
Guided the discussion toward concrete performance targets and the notion of AI as a creative agent, influencing later remarks about the need for deterministic models and the timeline for achieving AGI.
Speaker: Kenny Kesar
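The "nines" framing can be made concrete with a quick calculation. The snippet below is an illustrative sketch, not from the session, showing the error budget each additional nine of accuracy implies.

```python
# Illustrative only: the error budget implied by each additional "nine" of accuracy.
# "Five nines" (99.999%) allows roughly one error per 100,000 decisions.
for nines in range(1, 6):
    accuracy = 1 - 10 ** (-nines)   # e.g. 3 nines -> 0.999
    error_budget = 10 ** (-nines)   # allowed error rate per decision
    print(f"{nines} nine(s): {accuracy:.5%} accurate, error budget {error_budget:.0e}")
```

Each extra nine shrinks the tolerable error rate tenfold, which is why, on the panelist's estimate, every step takes years rather than months.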
Compute is just one element in a chain; we also need energy, cooling, data, implementation, language, and especially the human element—critical thinking education—so people can recognise AI‑generated manipulation.
Broadens the focus from a compute‑centric race to a holistic ecosystem, emphasizing education and human cognition as equally vital for AGI readiness.
Redirected the panel from a purely technical race to a societal preparedness narrative, prompting others (e.g., Simonas Cerniauskas) to stress education and awareness as part of risk mitigation.
Speaker: Simonas Satunas
We can categorize AI risks into four levels: (1) classic privacy, security, fraud; (2) human mental health; (3) social impacts like empathy and bullying; (4) macro‑level effects on democracy and societal manipulation. Each level needs its own mitigation and international collaboration.
Provides a clear, layered risk taxonomy that moves the conversation from generic ‘risk’ talk to a structured, actionable framework.
Served as a turning point that organized subsequent dialogue around specific domains (security, mental health, societal manipulation), leading to concrete suggestions on regulation and collaboration.
Speaker: Simonas Satunas
When video surveillance was introduced in basketball, coaches’ racist decisions vanished because the data made bias visible. This shows machines can make people better, not just introduce new biases.
Counters the common narrative that AI inevitably amplifies bias, offering a concrete example where technology corrected human prejudice, thereby enriching the ethical debate.
Encouraged a more nuanced view of AI ethics, influencing later comments about the role of oversight and the potential for AI to improve human decision‑making.
Speaker: Alexandra Bech Gjørv
AI will create a tiered ecosystem: tiny, efficient models for simple tasks and massive models for complex challenges like world hunger. Right‑sizing models will make AI commercially viable and curb the current cost‑to‑ROI imbalance.
Introduces a pragmatic solution to the scalability and cost problem, framing the future AI market as a spectrum rather than a monolithic race for the biggest model.
Shifted the discussion from a “bigger is better” mindset to strategic deployment, prompting considerations of sustainability, compute allocation, and business models.
Speaker: Kenny Kesar
The current hype may be a bubble; we risk over‑investing in compute that could become overcapacity. Even Zuckerberg admits we might have excess compute for years.
Provides a critical market perspective that questions the sustainability of the current investment frenzy, adding a cautionary note to the optimism.
Tempered the enthusiasm of earlier speakers, leading to a balanced conversation about responsible investment and the need for efficiency improvements.
Speaker: Simonas Cerniauskas
30% of content online is already AI-generated, feeding back into the same models and risking a feedback loop that could stall human intellectual growth. We must preserve critical thinking to avoid a vicious cycle in which AI erodes the very intelligence it seeks to emulate.
Highlights a paradox where AI’s own output may diminish the human capacity that fuels future AI development, raising a profound ethical and societal concern.
Deepened the dialogue on long‑term societal effects, prompting further remarks on education, awareness, and the necessity of maintaining human cognitive skills.
Speaker: Kenny Kesar
Overall Assessment

The discussion evolved from a broad framing of AGI’s emergence to a multi‑layered analysis of technical, societal, and economic dimensions. Key comments—especially those that introduced concrete definitions, quantitative benchmarks, risk taxonomies, and real‑world examples—served as turning points that redirected the conversation toward actionable insights. By juxtaposing optimism about rapid progress with cautionary notes on over‑investment, privacy, and human cognition, the panel collectively moved from speculative timelines to a nuanced roadmap that balances compute, hardware, regulation, education, and ethical safeguards. These pivotal remarks shaped the dialogue into a structured, forward‑looking discourse on how to responsibly navigate the path toward AGI.

Follow-up Questions
What is the specific role of compute in achieving AGI, and why is such massive investment in compute resources justified?
Understanding compute’s importance helps allocate resources efficiently and assess whether current spending is sustainable or a bubble.
Speaker: Mr. Vinayak Godse
How can we achieve contextual, low‑latency, and reasoning‑capable AI—specifically regarding language models, ambient computing, and world‑model architectures?
Addressing these technical challenges is crucial for building AI that can operate safely in dynamic, real‑time environments.
Speaker: Mr. Vinayak Godse
What security and privacy measures should be adopted now to prepare for increasingly powerful AI models?
Proactive safeguards are needed to prevent misuse of AI as capabilities grow, especially in the context of AGI‑level threats.
Speaker: Mr. Vinayak Godse
What could serve as an ‘anchor control’—early governance mechanisms or concepts—to steer AGI development responsibly?
Establishing foundational controls early can shape the trajectory of AGI and mitigate future risks.
Speaker: Mr. Vinayak Godse
How will growing dependence on AI affect human critical thinking, and what forms of cognitive warfare might emerge?
If AI erodes critical thinking, societies become vulnerable to misinformation and manipulation at scale.
Speaker: Mr. Vinayak Godse
How can we ensure that reliance on AI does not diminish human intelligence and critical thinking abilities?
Maintaining human cognitive skills is essential for innovation and for preventing a feedback loop where AI trains on AI‑generated content.
Speaker: Mr. Vinayak Godse
What global regulatory frameworks and collaborative approaches are needed to embed ethics, bias mitigation, and moral behavior into AI systems?
AI impacts cross‑border societies; coordinated regulation can address bias, misinformation, and unethical deployments.
Speaker: Mr. Simonas Satunas
What research directions (e.g., hierarchical reflex reasoning, embodied multimodal learning, neuromorphic and edge computing) show the most promise for reaching AGI?
Identifying promising technical pathways guides funding and research priorities toward viable AGI architectures.
Speaker: Ms. Alexandra Bech Gjørv
How can education systems be strengthened to improve public critical‑thinking skills and resilience against AI‑driven manipulation?
An informed populace is a key defense against deception, bias, and loss of agency in an AI‑rich world.
Speaker: Mr. Simonas Satunas
What resilience and rollback mechanisms should be designed to mitigate the impact of AI failures or malicious use?
Preparing for worst‑case scenarios reduces societal disruption and ensures continuity when AI systems malfunction.
Speaker: Ms. Alexandra Bech Gjørv
How should organizations develop AI Operating Procedures (AOP) analogous to traditional SOPs to ensure ethical, unbiased AI deployment?
Standardized operational guidelines can embed ethical checks into AI lifecycles, promoting responsible use.
Speaker: Mr. Kenny Kesar
What are the macro‑level societal impacts of AGI (e.g., on democracy, misinformation, agent‑swarm manipulation), and how can they be studied and mitigated?
Understanding large‑scale effects is vital for national security and for preserving democratic institutions.
Speaker: Ms. Alexandra Bech Gjørv
What are the energy‑efficiency and hardware constraints (e.g., low‑latency, neuromorphic chips) that must be overcome to realize AGI?
Hardware limitations directly affect feasibility, cost, and environmental impact of scaling AI.
Speaker: Ms. Alexandra Bech Gjørv

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.