Building Scalable AI Through Global South Partnerships

Session at a glance: summary, key points, and speakers overview

Summary

The session opened with Ankur Vora asking Sunil Wadhwani to describe how the Wadhwani Institute has used AI to address pressing social problems in India [1][2][7]. Wadhwani explained that after launching the institute in 2018, when AI was still nascent and most investment ignored societal needs, he and his brother shifted focus to partnering directly with government ministries to identify priority use cases such as tuberculosis and early-grade reading proficiency [17-20][23-30][31-34][35-38][41-45].


By analysing the TB care cascade, the team created a smartphone-based cough-analysis tool that provides a probabilistic risk score and has become the national standard, an AI-driven sputum-analysis system that returns results within a day, and a predictive model that flags patients likely to abandon treatment, collectively raising detection rates by 25% and reaching tens of millions [56-63][64-70][71-74]. In education, they built an AI suite that generates personalized reading exercises for each child, which was piloted in Rajasthan and subsequently mandated for all three million primary students in the state, demonstrating rapid scale [75-88][89-90].


Wadhwani highlighted three key lessons: impact requires early and deep government engagement, solutions must be designed for national-scale deployment from the outset, and leveraging existing digital public infrastructure such as Aadhaar, UPI, the TB case-management platform Nikshay, and the school platform Rakshak is essential for rapid rollout [91-100][101-108][109-118][119-124]. He also stressed that tools must ease frontline workers’ workflows, otherwise adoption will stall [125-127].


Ankur noted that moving from innovation to impact is not linear and praised the Gates Foundation’s new “Advantage India for AI” initiative that will support such government-led scaling [131-136]. Sunil reported that the institute now serves about 100 million Indians annually through more than 25 AI platforms, is fielding requests from other Global South nations, and has begun operations in Rwanda, Ethiopia and Kenya, aiming to reach 500 million people by 2040 [139-148][149-156][160-168].


Panelists Shalini Kapoor and Lacina Kone described AI diffusion as a “playbook” that can be shared across borders, emphasizing the role of Smart Africa’s AI Council and the need for coordinated public-private ecosystems to avoid reinventing solutions [179-188][189-197][198-226]. Kone argued that regulatory harmonisation and a “collaboration tax” (the resources needed to cooperate) must be addressed, with philanthropy acting as a de-risking layer for private investment [227-241].


Shikoh Gitau added that political goodwill and a shared sense of purpose are crucial for turning AI into a societal and economic lever across the Global South [259-272][273-277]. S. Krishnan reinforced the theme of democratizing AI, citing India’s frugal AI mission that offers compute at one-third global cost, sovereign models and datasets that can be shared, and the Gates Foundation’s partnership in showcasing nearly 900 AI-driven startups at the summit [289-319][320-328][329-342].


The participants concluded that the summit demonstrated tangible South-South collaboration, with attendees feeling inspired by the collective commitment to scale AI for people, planet and progress [382-400][401-408].


Key points

Major discussion points


AI-driven health and education solutions in India and how they were scaled – The Wadhwani Institute built AI tools for tuberculosis (cough-sound detection, automated sputum analysis, medication-adherence prediction) that are now national standards, and an AI-based reading-proficiency suite that has been mandated for millions of children in Rajasthan [41-70][74-90].


Key lessons for achieving impact at scale – Success required early and humble engagement with government ministries, designing for national-level rollout from day one, plugging solutions into existing digital public infrastructure (e.g., Nikshay, Rakshak), and ensuring the tools make frontline workers’ lives easier [91-127].


South-South collaboration and the diffusion of AI “playbooks” – Panelists highlighted the need to share pathways, leverage India’s digital public infrastructure experience, and create joint mechanisms (Smart Africa, Africa AI Council) so that innovations can move between India, Africa and other Global-South nations [177-194][198-226][259-270][363-381].


Strategic partnerships and the summit’s broader agenda – The Gates Foundation’s “Advantage India for AI” pledge, the India-Gates partnership, and the summit’s goal of democratizing AI, involving youth and emphasizing people-planet-progress, were repeatedly referenced as the framework enabling these collaborations [135-138][289-332].


Overall purpose / goal of the discussion


The conversation aimed to showcase how AI can be democratized and deployed at massive scale to solve pressing health and education challenges in India, extract the lessons learned, and position those experiences as a blueprint for South-South cooperation. By highlighting partnerships (especially with the Gates Foundation) and the summit’s vision, participants sought to catalyze cross-regional collaboration that accelerates AI diffusion across the Global South.


Overall tone and its evolution


– The dialogue began with an informative and enthusiastic tone as Sunil described concrete AI solutions and their impact [41-70].


– It shifted to a reflective, advisory tone when outlining the strategic lessons for scaling [91-127].


– The panel then moved to a collaborative and optimistic tone, emphasizing mutual learning, shared pathways, and collective ambition across continents [177-194][198-226][259-270][363-381].


– Throughout, there was a consistent undercurrent of positivity and forward-looking optimism, punctuated by brief acknowledgments of challenges (e.g., “working with government isn’t easy” [94-98]) but ultimately reaffirming confidence in partnership-driven AI democratization.


Speakers

Ankur Vora


Area of expertise: Global health and education initiatives, AI for social impact, philanthropy.


Role / Title: Chief Strategy Officer and President of the Africa and India Office, Gates Foundation[S13][S14]


Sunil Wadhwani


Area of expertise: Artificial intelligence research, AI for health and education, scaling AI solutions in low-resource settings.


Role / Title: Founder & Co-Chair (with brother Ramesh) of the Wadhwani Institute for Artificial Intelligence


Shalini Kapoor


Area of expertise: AI strategy, partnership building, ecosystem development.


Role / Title: Chief Strategist, XSTEP Foundation[S10]


Lacina Kone


Area of expertise: Continental AI policy, digital public infrastructure, public-private partnership in Africa.


Role / Title: Director General and CEO, Smart Africa[S1][S2][S3]


S. Krishnan


Area of expertise: National AI policy, digital public infrastructure, AI mission implementation.


Role / Title: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India[S4][S5][S6]


Shikoh Gitau


Area of expertise: AI product development, scaling AI in education and health, private-sector leadership in the Global South.


Role / Title: CEO, Kala[S7][S8][S9]


Additional speakers:


None (all speaking participants are covered in the list above).


Full session report: comprehensive analysis and detailed insights

The session opened with Ankur Vora asking Sunil Wadhwani to describe the work of the Wadhwani Institute in India, noting the organisation’s reputation for “democratising” AI and applying it to problems such as oral-reading fluency and tuberculosis screening [1-7].


Wadhwani explained that he and his brother founded the Wadhwani Institute for Artificial Intelligence in 2018, at a time when AI was still a niche field and before the advent of ChatGPT [17-20]. While serving on the Carnegie Mellon University board, he observed billions of dollars flowing into AI research [21-23] but recognised that none of this investment was directed toward societal challenges affecting the three to four billion people lacking adequate health care and education [24-26]. This realisation prompted the decision to launch a dedicated institute in India [27-30].


The early years were marked by technical development without scale. After a few years of “neat” prototypes that failed to reach users, the team reassessed its approach and concluded that having a good algorithm was insufficient; impact required a broader system of engagement [31-34]. The key shift was to work directly with government ministries, aligning AI projects with national priorities [35-38].


In health, the institute tackled tuberculosis, which the Ministry of Health identified as a top priority [41-44]. By mapping the TB care cascade they pinpointed three bottlenecks (a lack of functional X-ray machines, slow sputum-lab turnaround, and poor medication adherence) and responded with a smartphone-based cough-analysis tool that delivers an instant probabilistic risk score, an AI-driven automated sputum-analysis pipeline that returns results within a day, and a predictive model that flags patients likely to default on treatment, enabling 2,000 caseworkers to focus on the most at-risk individuals [56-60][63-66][68-70]. Together these interventions lifted TB detection by 25% in the last year and now touch tens of millions of people [71-74].


In education, the institute addressed the high dropout rate among primary-school children by building an AI-powered suite that generates personalised reading exercises and stories for each child; after a successful pilot in a large Indian state, the Rajasthan government mandated the tool for all three million primary pupils in the target age group [75-84][86-90]. The design ensured that frontline teachers and workers found the tool easy to use, reinforcing the lesson that adoption stalls if a solution does not simplify users’ jobs [125-127].


From these experiences Wadhwani distilled four overarching lessons. First, scaling is impossible without early, humble partnership with senior civil servants; the government must be involved from day one and held accountable alongside the technical team [91-100][107-110]. Second, solutions must be engineered for national-scale deployment at the outset, with explicit plans for training, distribution and field use [101-108]. Third, leveraging existing digital public infrastructure, such as Aadhaar, UPI, the TB case-management platform Nikshay, the school platform Rakshak, and the broader DPI ecosystem, provides the data pipelines and user bases needed for rapid rollout [111-124][311-314]. Fourth, and perhaps most important, tools must make frontline workers’ jobs easier; otherwise adoption stalls [125-127].


Ankur reflected that the path from innovation to impact is “not a straight road” and praised the Gates Foundation’s new “Advantage India for AI” pledge, which aims to fund AI-for-social-good initiatives in the Global South [130-135].


Wadhwani then reported that the institute now reaches roughly 100 million Indians each year through more than 25 AI platforms, and that it has begun fielding requests from other Global-South governments [139-148]. A dedicated deployment team of about 100 staff supports these efforts [149-151]. In the past year the institute dispatched teams to Africa and launched operations in Rwanda, Ethiopia and Kenya [154-156], with the goal of impacting 500 million people worldwide by 2040 [160-168]. He linked this ambition to Prime Minister Modi’s vision of “design in India for the world, develop in India for the world and deliver these solutions to the world” [165-168].


The subsequent panel expanded the discussion to AI diffusion across borders. Shalini Kapoor described diffusion as the “rails” that must be laid for AI, likening it to the digital-public-infrastructure tracks that enabled earlier internet expansion, and argued that documented “playbooks” should be shared so that a solution built in Kenya can be reused in India and vice versa [179-194]. Lacina Kone, representing Smart Africa, explained that the continent’s AI Council brings together governments, private firms and philanthropies to create a regulatory “cloud” that precedes financing; he stressed that finance is the “last thing” to consider once the policy environment is stable [198-226][231-238]. Kone also introduced the notion of a “collaboration tax” – the effort and resources required to coordinate multi-stakeholder projects – and suggested that philanthropy can act as a de-risking layer [236-238]; Shikoh Gitau echoed Kone’s point, noting that reducing the collaboration tax is essential for cross-regional partnerships [259-272][273-277].


S. Krishnan outlined India’s AI Mission, a frugal model that supplies compute at roughly one-third of global prices, builds sovereign AI models with taxpayer funding, and makes both the compute and models openly available for other Global-South nations [307-319][312-317][318-321]. He highlighted the “people, planet, progress” pillars that framed the summit [300-304], noted that close to 900 startups showcased AI applications across the halls [340-345], and described the “African village” set up to demonstrate solutions that work in different parts of the world [336-339]. The shared compute-and-model pool was referred to as the “AI Kosh” (also called the “AI treasury”) [322-327]. Krishnan also emphasized that DPI is the backbone for democratising AI [311-314]. He mentioned the partnership with the Gates Foundation in curating the summit, the establishment of a Centre for International Cooperation under the National Institute of Smart Governance, and the broader goal of opening the AI treasury to the world [329-342][332-337].


Across the discussion there was strong consensus that (i) government partnership and alignment with national priorities are indispensable for scaling AI; (ii) existing DPI is the backbone that enables rapid, cost-effective deployment; and (iii) South-South collaboration, through shared pathways, playbooks and mutual learning, offers the most efficient route to diffusion [91-100][111-124][179-194][198-226][259-272][307-319]. The perspectives varied in emphasis: Wadhwani stressed early government partnership, whereas Kone emphasized that a predictable regulatory environment is the prerequisite for private-sector execution and that finance follows [92-100][231-238].


Key takeaways from the session include:


* AI-driven tools for TB (cough-sound detection, automated sputum analysis, adherence prediction) and for early-grade reading have demonstrably improved health and education outcomes at scale.


* Early, humble engagement with ministries, integration with national DPI, and user-centred design are essential for impact.


* South-South collaboration should be organised around documented “pathways” and playbooks, with bodies such as Smart Africa’s AI Council providing the regulatory “cloud” that enables private investment.


* India’s frugal AI Mission offers a replicable model of sovereign, open-source compute and datasets that can be shared internationally.


* Partnerships with foundations-particularly the Gates Foundation’s “Advantage India for AI” pledge-are viewed as critical bridges between innovation and impact.


Thought-provoking remarks that shaped the dialogue were:


* “The only way to scale is government… you have to work with government from day one” [92-94];


* “If the tool does not make the frontline worker’s life easier, it will not be adopted” [125-127];


* The diffusion metaphor of “rails” and “playbooks” for AI [179-184];


* “Finance is not the issue; the regulatory cloud is the rain that makes finance fall” [233-235];


* The introduction of the “collaboration tax” as a hidden cost of partnership [236-238];


* India’s compute being offered at a third of global cost, illustrating a frugal, open-source approach [307-319].


Action items emerging from the discussion include launching Wadhwani AI operations in Rwanda, Ethiopia and Kenya [154-156]; targeting 500 million people impacted globally by 2040 [160-168]; sharing India’s AI treasury (compute, models, datasets) with other Global-South nations once capacity thresholds are met [322-327]; establishing the Centre for International Cooperation to support DPI implementation abroad [332-337]; deepening the Gates Foundation partnership for funding and knowledge exchange [329-332]; and developing and disseminating “AI pathway” playbooks to lower the collaboration tax [179-194][236-238].


Unresolved issues highlighted were the lack of concrete mechanisms and timelines for transferring sovereign models and compute to partner countries, the challenge of harmonising regulatory frameworks across diverse African jurisdictions to create a continent-wide “cloud”, the precise financing models required for large-scale deployments beyond the statement that finance is a later concern, and the need for robust metrics to monitor the impact of exported AI solutions in new contexts.


Krishnan closed by celebrating India’s resilience and the collective spirit of the summit, underscoring the commitment to keep democratizing AI for the Global South [350-354].


Session transcript: complete transcript of the session
Ankur Vora

The first question around India. One of the things you’ve done and your organization has done is you found ways of taking the power of AI, democratizing it, and making sure it solves problems that we all care about. In my speeches, I’ve talked about you. I’ve talked about oral reading fluency, the tool whereby for less than, and if I’m stealing your thunder, sorry, but he’ll tell you a little bit more. But I’ve been talking about it because it’s just amazing. I’ve been talking about the fact that your TB screening, you can do things that we couldn’t imagine being done before. So can you tell us more about your work in India?

Sunil Wadhwani

Sure. Hi, everyone. Thank you. Welcome. Thanks for being here. Thank you for having me. I suspect the way I got over here was they needed, Gates Foundation needed someone for this chat. They looked around. They found this guy wandering around with two badges. They figured that means he’s important. Let’s get him. And next thing I’m sitting over here. But thank you so much, Ankur. So, you know, my brother and I launched the Wadhwani Institute for Artificial Intelligence here in India about eight years ago, 2018. Back then, AI wasn’t a thing. ChatGPT hadn’t come out. But I happened to be serving on the board of trustees of Carnegie Mellon University in the U.S., where I had studied, gotten my master’s.

So CMU was then ranked number one in the world for artificial intelligence research and teaching. So being on the board, I could see all the billions of dollars coming into AI from Google and so on and so forth, even in those days. And it always pained me that none of this money was going into AI for society. You know, three, four billion people in the world out of eight billion don’t have access to decent health care, decent education. AI could be transformative. And that’s what we’re talking about today. But at that time, nothing was going on. So I spoke to my brother Ramesh. We decided, let’s launch this Institute for AI in India.

Prime Minister Modi came, inaugurated it, etc. So we hired a really good team of AI machine learning people, spoke to government, identified use cases, started working, and nothing happened. A couple of years went by, we were developing this, what we thought was really neat stuff, but it wasn’t scaling up. So we took a look at the issues, etc. And then we started realizing, look, we’re not approaching it quite right. We’ve got great AI solutions, but there is a lot more, a lot more to actually having impact than just having a nice technical solution. So I’ll, in a couple of minutes, tell you what we’re doing. So the key lessons that we’ve learned. But once we started figuring out, OK, what we were not doing and that we needed to be doing, then things started happening.

So just to give you two or three examples, as Ankur mentioned, we identify our problems, our challenges that we want to focus AI on by working directly with government. We talk to the health ministry about their national priorities for the next three, four, five years. What should we do? We talk to the education ministry and so on. So three years ago, the health ministry told us that tuberculosis is a very high priority for us. It’s the largest infectious disease killer in the world, kills close to two million people a year. Largest infectious disease killer in India kills close to half a million people over here. And for each person that dies, there are 20 others that don’t die, but they live miserable lives and they are infecting lots of other people as they go on.

So the government, the health ministry said, can you help? So we took a look at the whole cascade of care in tuberculosis. What’s the patient journey like? Where are the three or four or five key pain points? And we identified, okay, diagnosis is number one, because in these economically vulnerable communities where TB happens, you need x-ray machines, you need sputum analysis, and in these communities, you don’t have all this stuff. You don’t have x-ray machines that work and are calibrated and so on. Problem number one. Problem number two, sputum analysis is another way of diagnosing TB, but these samples go to 64 government labs around India where they are analyzed, et cetera, and it takes time for the results to come back to the patient.

And for those that have TB, you’ve lost a lot of valuable time. Third big challenge is there’s a number of patients with TB who are on the medication regimen, but these are very toxic medicines. They really destroy your body while they’re trying to cure you of TB. So a lot of people stop taking these medicines, and they develop drug-resistant TB, which is much worse, 50% mortality rate, etc. So we started applying AI to each of these issues. On the diagnosis, we’ve come up with a way of detecting tuberculosis from the sound of a cough into a smartphone. It’s instant. It’s quick. We don’t just say yes or no. We give the risk of this person having TB, what’s the probability, etc.

That is now rolling out nationally, and it is becoming the national standard. And by the way, we’re the only country that has this. It doesn’t exist anywhere. World Health Organization has told us this could be a game changer globally. For the sputum analysis, we’ve developed an AI model. So now the sputum analysis in the 64 government labs, totally automated. Results come out within a day, go back to the patient, treatment starts. Perhaps the most challenging thing: these patients who will fall off their medication. We’ve developed AI algorithms that predict well ahead of time which TB patients are likely to fall off the medication. So then the 2,000 TB caseworkers in India, which is a very limited number for 4 million TB patients, they can focus on the right people.

This is impacting now tens of millions of people. Just in the last year, the rate of TB detection, thanks to our cough against TB, has gone up by 25%. You may think that’s bad news, you know, higher numbers, but now we can treat these patients. We can get them on the right, you know, clinical care protocols. That’s one example. Education. Throughout the global south, there is a very high dropout rate of young children from schools, very high, in grades 1 through 5. Problem in India, problem everywhere. We got a call from a very large state government in India that said, we’ve got this issue, can you help? We sent a team in

and we had to analyze what’s causing this high dropout rate. We learned that the single biggest reason for this high dropout rate is an inability of these very young children, 7, 8, 9 years old, to be able to read. If you can’t read, it affects how you do in every subject, right? Science, history, geography, you struggle, you start failing, you get frustrated, and these are, again, poor communities. Your parents say, forget school, what’s the point? Come work in the field or work in the kitchen, and that affects the rest of their lives. We’ve come up with an AI-based suite of tools… that goes to the teacher, and for the child, we come up with personalized exercises…

stories that they can read at home, but which help them to get better at their specific area of weakness. Each child is different. We were in pilot with the state. They were so impressed, they made it mandatory for all 3 million school kids in that state, in that age group. That’s the State of Rajasthan. So that’s the kind of scale one can get. What’s the difference between what we were not doing in our first 2 or 3 years versus what we’re doing now? What we learned is, number one, the only way to scale is government. You have to work with government from day one. Working with government isn’t easy, right? It’s easy to say; it’s challenging, it can be frustrating at times,

but you have to understand how to navigate it. How do you work with senior civil servants? You know, approach them with humility, not like you have the answers. You’re trying to understand the problem. You want to work with them. Secondly, think scale from day one. You can’t develop an AI solution and then say, oh, I want to use it on one million people. There are issues you have to think through right in the beginning as to how this will scale out. How will large scale training happen in the field? How will frontline health workers or, you know, teachers or other government workers use this? So thinking that way very upfront is key.

And in fact, with government, what we do now is once we identify a problem, even before we work on the technical solution, we plan that deployment to scale as to what will happen. And we make government accountable for a lot of it, just as we’re accountable for the technical side. The other really key learning has been that government, and this relates, Ankur, to what you were just saying, has developed a lot of digital public infrastructure. Aadhaar is like the great example that we’re all aware of, right? UPI, Unified Payments Interface, incredible example of that. So there are lots of things in health care, in education and agriculture where the government has developed this digital public infrastructure.

And it’s critical. We didn’t know this. This was probably our key finding. It’s critical to find a government platform that you can integrate into. So the examples I’ve given of TB, government has a wonderful platform called Nikshay. It’s like a case management system for tuberculosis patients. We’ve integrated everything. We developed algorithms into that platform. The education, this early childhood reading proficiency, each state has a platform. Rajasthan, as an example, has a platform called Rakshak. Rakshak for 70,000 schools, 400,000 teachers, 8 million students. We plugged our algorithms into that platform. So if these platforms hadn’t been there, we’d be struggling to scale any of this up. The final learning that we’ve had, and maybe this is the most important, is all these technical solutions are great at a macro level to bring down TB, to improve reading proficiency, etc.

But at the end of the day, if the person using this tool, the frontline health worker, the teacher, if it doesn’t make life easier for them, in addition to improving education for the child or healthcare for the patient, it won’t happen. You can push all you want from the top that, oh, you must use this, but there’s got to be pull. They’ve got to want to use it, and that happens only when you make life easier for them.

Ankur Vora

Thank you very much, Sunil. We’ll do the next question a little bit quickly. But I do want to just… acknowledge a few things, call out a few things. One is this journey between innovation and impact. I love the learnings you talked about, because we keep on sometimes focusing on the innovation part, and we think that the road from innovation to impact is a straight road, and it’s not. It’s possible, it’s probable, but it’s not guaranteed, and we need to work hard at it. And so your learnings get to it. It’s also one of the reasons why we love our seven-year partnership, and hopefully it’ll be much more as we think about, as some of you know, the Gates Foundation yesterday announced a new initiative, a new pledge around AI, which is Advantage India for AI.

And the idea is to make investments in India for the global south, and we’re looking forward to partnering with you. So, Sunil, one of the places where we do partner, and we’re talking about things, in fact, earlier today, we were talking about work in Ethiopia and Rwanda and Kenya. Can you talk a little bit more about… how you think about your work in the context of South-South partnership and how do you take the learnings you have from one place to the other place?

Sunil Wadhwani

Sure. So when we got started, our goal was only India, right? My brother and I, we are from India, our hearts are still here. So we weren’t thinking about any place else. But what’s happened is as our AI solutions have been scaling up in India quite dramatically, and we are today impacting over 100 million people a year, we’ve developed over 25 AI platforms in partnership with government. We’ve started over the last year getting a lot of inquiries from governments around the world in the global South saying what you’re doing in India, we need in Kenya or in Rwanda or in Indonesia or Egypt or Mexico. By the way, in India, we don’t just develop these solutions. We also do a lot of capacity building, meaning training of senior civil servants on how you can use AI.

What it’s good for, not good for, etc. We help ministries develop data governance standards, use case frameworks and so on. Then we do the actual solutions development. That’s the biggest chunk of what we do. But then we have a big deployment team. We have close to 100 people making sure that these things get deployed. So we were thinking only India. But over the last year, we started getting all these inquiries. And we finally said, look, we set up this foundation to have impact globally. So now we’ve sent a team out to Africa to meet with several countries. We are starting operations this month in Rwanda, Ethiopia and Kenya. And I’m glad to see a colleague over here from Smart Africa.

We will be partnering in this work. We’re very excited about that. And then beyond that, we expect to be going to a number of other places. Today, as I said, we’re impacting maybe 100 million people in India. Our goal is, by the year 2040, to be impacting 500 million people. We are very excited about our partnership with you at the Gates Foundation. So the Prime Minister Modi, if you heard him yesterday in his speech, he gave a brilliant speech. As part of that, he said, for the last several years, I’ve been saying make in India for the world. He said, now I want to add to that: in this age of AI, design in India for the world, develop in India for the world and then deliver these solutions to the world.

And that is what we’re trying to do. That’s the evolution of our thinking now. And again, we’re excited, very excited about the partnership with you.

Ankur Vora

Thank you, Sunil. Are we I think we have a change of plans. Thank you so much. And Sunil, if you could please stay on stage and if I could invite our panel up. Shalini Kapoor, Chief Strategist from the XSTEP Foundation. Lacina Kone, Director General and CEO of Smart Africa. And Shikoh Gitau, CEO of Kala. Thank you.

Shalini Kapoor

Good to start? Okay, thank you so much. Thank you, Ankur, thanks for your time, and here we are. Thanks, Sunil, for spending some more time with us; thanks, Shikoh — we have been bumping into each other; and thanks, Lacina, for being here. So we’ll get into some of the discussion on the South-South collaboration that is spurring innovation and diffusing AI into all the sectors. AI diffusion is about the routes and the rails which need to be laid for AI, the same way digital rails were laid in the DPI era. And now they can be shared — they are playbooks; they can be shared.

And the concept of AI diffusion actually started with Jeffrey Ding, a professor in Washington, D.C. He described how electricity was pioneered in Germany, but it was diffused across India and across the USA, where the USA made such great strides with it. So like electricity, AI is a general-purpose technology — a GPT. And between invention and impact there is a big layer of adoption and diffusion which needs to be there so that AI gets diffused into society. And when it gets diffused into society, something that was built in Kenya can come to India, and something built in India can go to Kenya, because there are playbooks which can be leveraged.

Not everybody needs to build everything; not everybody needs to build the entire stack. How do we learn from each other, and how does that South-South collaboration happen? That’s the focus. So I’ll start with the pathways — what are the pathways to scale? — and I’ll start with Lacina. You are leading Smart Africa, and you help coordinate across a lot of nations at different stages of AI: somebody in pilot, somebody in production, somebody has solved data, somebody has solved language, somebody has solved voice AI. What do you think are the opportunities for South-South collaboration in building these pathways together?

Lacina Kone

Yeah, thank you very much for inviting me. In fact, before we even talk about South-South, collaboration is the very reason for the creation of Smart Africa. Because if you look at Africa through just Kenya, which is 50 million, Ghana, which is 30 million, or Nigeria, which is 240 million, you’re missing the point. To look at Africa as 1.4 billion people, and to be able to leverage that 1.4 billion, you need collaboration and scale. So coming from that, when you look at the continent of Africa — which is, technically speaking, part of the Global South — the South-South collaboration is very important, because we do not need to reinvent the wheel.

India has shown the world what DPI actually means for 1.4 billion people: a working digital ID, a country able to organize an election for 850 million people to vote. You have to give them kudos. So we don’t need to reinvent the wheel — Africa can learn a lot from India, even with the use cases. And why the use cases particularly? Because there is a lot of similarity between India and Africa in terms of culture and in terms of values. We all know we are into AI, we are into digital transformation; but the real measure of a society is the inclusion of its population and the ethical use of the technology.

So coming from that, that’s one of the reasons to re-boost the Smart Africa Initiative. The creation of the Africa AI Council came into play: last April, on April 4th, 2025, 49 countries came together to sign the declaration. Subsequent to that, the AI Council came to life on November 12th, 2025, in Guinea, after our board of directors, which is represented by the heads of state, accepted it. We have had a first meeting already. The council consists of 15 members and is not driven only by the public sector: it has seven ministers coming from seven different countries and eight private-sector members. Why? Because in the constitution of Smart Africa, it is private sector first. We believe that government should be creating a conducive environment for the private sector to excel.

It cannot be dominated by the public sector. And underneath the council we have six thematic groups: compute, where we can look at South-South collaboration on computing power; data sets; skills; regulation, which is the governance; the market; and investment. And when it comes to investment, something we need to know: the investment cycle of prior technologies is too slow for AI. Just look back 12 months — where were we, and where are we today? So this is something we need to look at carefully.

We are looking at three aspects. One, the government needs to create a conducive environment for the private sector to chip in. Two, the private sector needs to execute — but everyone cries that finance is the issue, and I always say finance is not the issue; finance is the last thing you should think about. You know why? Because financing is like the rain: for rain to fall, you need certain conditions in the clouds. Those clouds are the regulatory environment, the conducive environment for business, because the private sector does not like unpredictability. And the third aspect is the philanthropies. The reason I want to speak about them is that philanthropies need to serve as de-riskers, because government is the last to invest in a technology — they want to make sure it’s going to work.

So it is time for the philanthropies to accelerate, to use their capital for de-risking, so that the private sector can chip in, and so on and so forth. Thank you.

Shalini Kapoor

Yeah, thank you so much. You talked about DPI, and about the private sector and the public sector coming together — it’s the entire ecosystem. And on the 18th, Mr. Nandan Nilekani announced 100 Pathways to 2030, which, if you ask me, is a clarion call for people to join in creating pathways. Because suppose Edmund Hillary has climbed Mount Everest — do you think he’ll come back and say, I’m not going to tell you how I climbed? What route did I take? Where did I go, and where did I not go? What did I see? He will talk about them, so that it is easier for other travellers to come after.

A pathway is like that: if someone has travelled the AI pathway, others should learn from it and benefit from it. So, Shikoh, you were with us on stage when you joined, and you said that, from the Global South, you would like to join us on 100 Pathways to Scale. Please tell us: how can that diffusion help? How can we work and collaborate to get AI use cases from pilot to production?

Shikoh Gitau

Can I finish? Okay. It’s that collaboration. As I was saying, it’s this idea of how we bring this multiplicity of thinking together, given that we have the same exact challenges: we have multiple languages, we have cultural diversity, we have things we need to work on together. How do we collaborate? And for us the biggest takeaway is: how do we make AI not just a technology, but a political and economic issue? That was the biggest one — because the people are there, the builders are there, the researchers are there, the policy makers are there, but we need that political goodwill to make this work together.

And something that CV Madoka — from, I’ve forgotten the organization, CBC — said struck a chord with everybody, including myself: we need to start having a conversation about what is called the collaboration tax. It’s something the DG and I were talking about as we were coming in. I’ll define the collaboration tax as the effort and resources you need to put in to be able to collaborate with each other. Reducing it is what government should be doing; it’s what the political part of AI should be doing — bringing the collaboration together so that people can come together without the pain of collaborating. That’s what we need to be talking about, because the resources are there, the people are there, people are willing to collaborate and work together; we saw it this week. And as I said to the minister: while you’re chasing the Guinness World Record for the most number of people, also chase the diversity that these countries are seeing. Thank you so much for bringing Africa and the world to India.

Shalini Kapoor

Thank you, Shikoh. Thank you so much. We’ll take a small break in the panel discussion and have Mr. Krishnan come here and talk about how scale and collaboration can help the South-South, and what is transferable from India. I know he is busy across everything. So over to you, Mr. Krishnan.

Ankur Vora

I was just going to do one more thing, which is to thank you, Shalini, and thank the panel for allowing us this small break. For those of you who don’t know, Secretary Krishnan from the Ministry of Electronics and Information Technology (MeitY) is over here. He has had probably one of the most amazing, successful weeks this week, so please join me in giving him a big round of applause. Secretary Krishnan, thank you so much for being here. As a proud Indian, I’m quite excited about what happened this week. And as somebody who cares about the agenda of the Global South, everything that happened in India this week put the Global South agenda right front and center.

So thank you for doing that. There were so many announcements made this week about how we are going to make progress together in the months and years to come. We would love for you to give a little more context on the announcements that were made and what was achieved this week. Thank you. Welcome.

S. Krishnan

Let me first apologize to the panel for having stepped in rather abruptly; I am juggling many things going on across the summit. But this is a very important session as far as I am concerned, because if this particular summit was about one thing, it was about the Global South — the fact that India, representing the Global South, could actually dare to host this event, and dare to host it on this scale. One thing we were very clear about is that summits thus far have basically been about country leaders, about CEOs, and about some experts getting together in closed rooms, without really having the opportunity to showcase to people what the possibility of the technology is.

And this particular event gave us that opportunity. We were very clear that what we wanted to do was to let people into the rooms. We wanted to make sure that people, especially youth, had the opportunity to come and listen to the best minds there are on artificial intelligence as a technology, and to every possible perspective on how this technology can work for everybody. The second thing, as you are well aware: we said people, planet, progress. And two — or rather three — aspects of it were very important. One is democratizing access to AI and all the AI infrastructure and resources; that was one key aspect. The second was including those who are not ordinarily given access to this — those who are excluded.

And the third key aspect is putting humans at the center of this process, to make sure that this is a technology that works for people. The Prime Minister was very clear and emphatic in his address yesterday, where he put people — manav — right at the heart of AI. To enable this to happen, multiple things have to happen. We have to find frugal ways to innovate in order to make these resources available, and we have to make sure that we have the resources. We believe that our own India AI Mission model is one of those frugal ways in which the compute infrastructure, the model infrastructure and the data-set infrastructure can be created for each country, because some of this needs to be specific to regions and to countries.

In India, we are at a subcontinental scale. There are 22 official languages and many other languages which need to be catered for and understood, and we understand this cultural and linguistic diversity better than any other region in the world; at a continental scale, we can contribute to that effort. That is one key element. The second key element is to create compute in a way such that people are not able to build moats around it. The implication that you need such enormous resources that nobody else can do this, and only we can do it, is not an approach we wanted to take.

So we created a model where the private sector is encouraged to invest, and where access is something the government subsidizes. In the process, AI compute in India is today available at a third of the price at which it is available in the rest of the world. I think that has been a significant achievement. The United Nations asked us: once you build it out at scale, would it be possible to share with the rest of the world? We have committed to them that, as and when the capacity is adequate to meet other requirements, we will be happy to share it.

We are happy to share the model even now. The AI Kosh — the AI treasury, as we call it — is something we are happy to share even now. The models we have supported in India, built out as sovereign models, are again technology we are happy to share with the Global South; it is something we can enable. Some of it we have built with our own resources, so it is, in a sense, completely sovereign: unlike in many other places, it is something the government has paid for from taxpayer resources, and we can use it. The third element, of course, is the data sets and how they are shared; that framework is again something we can certainly share. Most important of all — and I think this is also showcased so eloquently in the expo — is the range of applications which have been created out of this. There are close to 900 startups across all those halls who have done a variety of things. Even in the main hall, together with the Gates Foundation, we have set up the African village, which is such a showcase, even to the leaders — fundamentally for people to see applications which have worked in different parts of the world and which can be taken elsewhere.

So all of those are available. These are resources we want to share — resources we want to actually give. And most importantly, as I said, if there is one thing this summit reflects, it is that for the first time we have actually democratized AI. We have shown you what democratic AI looks like when people are let into the rooms and the halls and can see for themselves how this would work. So it gives me immense pleasure that a very, very key partner for us in all of this has been the Gates Foundation, right from the very outset — from the planning stage of how we wanted to do this.

This particular set of sessions on the Global South is something we worked on closely and curated carefully; we have put together sessions which will be relevant to this group. And we have always made sure that, in addition to this, on every occasion — whether in the space of DPI or in the space of any of the other applications — we are in a position to support it. Under one of our organizations, the National Institute for Smart Government, we have now set up a centre fundamentally focused on international cooperation, so that it can provide support to other countries where DPIs are to be implemented, and on how to ground them.

And we believe that probably the most effective way of dealing with this is to cooperate amongst ourselves, so that we can take it out, learn from each other and contribute to each other. That is something we are now really ready to do. India knows what it is to be deprived of or denied technology, and India knows what it is to work your way past that. We have managed to do it. We have managed to democratize technology, make it available to people at scale, keep it open source, and protect it in a number of ways from cyber attacks in each of those areas.

So in this entire technology stack there is experience — in the way we have leapfrogged different stages. If we work together in the AI space likewise, there is so much that can be accomplished. And I can say with responsibility that, as a nation, we undertake that we will have devices and structures through which we can deepen this cooperation and this support, enable it in a number of ways, and continue to stay engaged — through the Gates Foundation and through the other institutions — to actually make this happen. So thank you very much. Thank you to the Gates Foundation for curating this particular event, and thanks to all of you for participating in it.

I mean, it is one thing to arrange and organize it, but quite another for all of you to actually come here and put up with some of the inconvenience which may have been caused. India is not a very convenient country at the best of times, but India is a country with spirit, and India is a country which fixes things.

Shalini Kapoor

Okay, we have some time with us — how much? Five minutes. And we need to have a question for Sunil. He has been working on so many interesting things — I have listened to the stories of Wadhwani AI, and they inspire you thoroughly. So Sunil, I’ll give it to you. I want people to hear your message: how can this work that has been done in India help the Global South?

Sunil Wadhwani

So, as Mr. Kone of Smart Africa said 15 minutes ago, the challenges that we have across the Global South — Africa, India, other countries — are similar. The values we have, more importantly, are very similar. The strengths we bring to the table in terms of our talent, our youth, et cetera, are similar. So I think it’s mutual learning. It’s not one-way; it’s not that we’ve developed great stuff in India that can just be taken over. It’s mutual learning on both sides, a mutual sharing of ideas. There are lots of very good things happening in Africa and Asia that we can learn from over here.

On the technology side, as you were saying, Shalini, we’ve been fortunate. We’ve had a government that is very pro-technology. There is a tremendous range of digital public infrastructure we can access in India that provides data pipelines and digital distribution systems, without which none of this AI can scale. There has been a very clear regulatory framework for AI developed in India, which really helps. And most important, there is an openness in government, driven, I think, by the Prime Minister’s vision and belief that technology and AI can truly transform societal development. Those, to me, are the big things — more than individual AI solutions — that really make a difference. And I see that happening in many countries in Africa.

Shalini Kapoor

Sure. Thank you so much. We are literally at the eve of the summit getting over. It has been a fantastic week: meeting the best of people, listening to the best of sessions, navigating the traffic — yes. But like Secretary Krishnan said, we fix everything. So I just want to ask one last question to each of the panelists: what is the best thing you liked about the summit? To you first, Sunil — one moment, one feeling that you will carry forward.

Sunil Wadhwani

I will give you a counterintuitive answer. AI is making the world move faster and faster and faster. And all the traffic challenges we’ve had over here are teaching us patience. You will get there. Things will happen. Life will go on.

Shalini Kapoor

Thank you so much. Shikoh, what’s the one feeling you’ll travel back with to Africa?

Shikoh Gitau

I think my best moment — and I’m going to be selfish and pick it — was the moment in the Oberoi when I stood and saw this diverse sea of faces, about 300 people, all celebrating: we can do this as the Global South. This is happening on our turf, and there is this belief that the Global South has something to offer in this AI conversation. So it’s no longer a two-horse race; it’s a multiple-horse race. Thank you.

Lacina Kone

For me — you know, our vision is to transform Africa into a single digital market, and coming here this time shows me our future. Having 1.4 billion people under one regulation: what does that feel like? So you know what I’m talking about — regulatory harmonization is one of our obstacles, and in India you do not have that problem. You are multicultural and multilingual — what does that feel like? Including the traffic in the morning as well, of course. Thank you.

Shalini Kapoor

And my best moment of the summit was back at the Oberoi on the evening of the 18th, when several partners — from Italy, Kenya, Anthropic, Google, Carnegie, ORF, Gates, EkStep — stood next to Nandan, and we all came together for 100 Pathways till 2030. We were all together; we were not doing the non-collaboration pictures that go around on Insta, we were all together. So AI is about collaboration, not competition — that’s the theme. Thank you all; I really enjoyed it. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (32)
Factual Notes — Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Sunil Wadhwani and his brother founded the Wadhwani Institute for Artificial Intelligence in 2018.”

The knowledge base states that the founder set up Wadhwani AI (the institute) back in 2018, confirming the founding year and founders’ involvement [S19].

Confirmed (high)

“Tuberculosis was identified by the Indian Ministry of Health as a top national priority.”

S17 explicitly notes that eliminating tuberculosis has been a national health priority for the Indian government for several years, confirming the claim.

Confirmed (high)

“Scaling AI solutions in India requires early, humble partnership with senior civil servants and government involvement from day one.”

Both S12 and S18 emphasize that government partnership from day one is essential for achieving scale in AI deployments in the Global South, supporting this lesson.

Additional Context (medium)

“Adoption of AI tools stalls if they do not make life easier for frontline health workers or teachers.”

S18 highlights that if a tool does not simplify the work of frontline health workers or teachers, it will not be used, adding nuance to the adoption challenge described in the report.

Additional Context (medium)

“The institute shifted its approach to work directly with government ministries, aligning AI projects with national priorities.”

S12 describes a systematic re‑evaluation that led to the insight that deep collaboration with government ministries is required for successful AI implementation, providing context for the reported strategic shift.

Additional Context (low)

“The institute’s work exemplifies “democratising” AI by applying it to oral‑reading fluency and tuberculosis screening.”

S15 discusses AI’s role in addressing hard problems in education, such as assessing each child’s learning journey, while S17 covers TB as a health priority; together they provide supporting context for the institute’s focus areas, though they do not directly use the term “democratising”.

External Sources (114)
S1
Open Forum #47 Demystifying WSis+20 — – **Lacina Kone** – CEO and Director General of Smart Africa, a Pan-African organization based in Kigali Lacina Kone pr…
S2
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — **Lacina Kone**, Director General and CEO of Smart Africa, provided a continental perspective that proved influential th…
S3
What policy levers can bridge the AI divide? — – **Lacina Kone**: Director General and Chief Executive Officer, Smart Africa LJ Rich: H.E. Dr. Tatenda Anastasia Mavat…
S4
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology)
S5
Empowering India & the Global South Through AI Literacy — -Shri S. Krishnan: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India
S7
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — **Shikoh Gitau**, CEO of KALA, participated virtually and brought private sector perspectives. Her pointed question abou…
S8
IGF 2025: Africa charts a sovereign path for AI governance — African leaders at theInternet Governance Forum (IGF) 2025 in Oslocalled for urgent action to build sovereign and ethica…
S9
What is it about AI that we need to regulate? — InWS #214, Shikoh Gitau asked:”But who is drafting these policies? What agenda do they have? Do they have Africa at hear…
S10
https://app.faicon.ai/ai-impact-summit-2026/building-scalable-ai-through-global-south-partnerships — Thank you, Sunil. Are we I think we have a change of plans. Thank you so much. And Sunil, if you could please stay on st…
S11
Safe and Responsible AI at Scale Practical Pathways — – Ashish Srivastava- Prem Ramaswami- Shalini Kapoor – Rohit Bardawaj- Shalini Kapoor
S12
Building Scalable AI Through Global South Partnerships — – Sunil Wadhwani- Shalini Kapoor
S13
Keynote-Ankur Vora — – Ankur Vora: Works at the Gates Foundation, overseeing the foundation’s work across Africa and India offices. Previousl…
S14
Responsible AI for Shared Prosperity — -Ankur Vora- Chief Strategy Officer and President of the Africa and India Office at the Gates Foundation
S15
Keynote-Ankur Vora — AI is not a leap into the unknown for India. It is the next chapter in a journey of building solutions that serve everyo…
S16
Capacity Building in Digital Health — Well, so here is the, there’s a, that’s a spicy question, but let me, let me, let me handle it. Well, this is in the U ….
S17
AI for Social Good Using Technology to Create Real-World Impact — – James Manyika- Sunil Wadhwani – Sangbu Kim- Sunil Wadhwani
S18
Building Scalable AI Through Global South Partnerships — Evidence:The solution is now rolling out nationally, TB detection rates have increased by 25% in the last year, and the …
S19
AI for Social Good Using Technology to Create Real-World Impact — 1463 words | 166 words per minute | Duration: 528 secondss Thanks, James. Good morning. Just so we’re all clear, there’…
S21
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti — Absolutely. I think there’s already work going on. Specifically, there are three big areas where there’s thinking going …
S22
Building Indias Digital and Industrial Future with AI — Good morning, everyone. Warm welcome, distinguished guests, colleagues and partners and speakers who have joined us toda…
S23
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So I think, firstly, India’s journey in DPIs has been a fascinating one. It makes me immensely proud that whichever coun…
S24
Building Indias Digital and Industrial Future with AI — Thanks, Rahul. Those were very key messages which you gave in which the network is being used for citizen -centric servi…
S25
https://app.faicon.ai/ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — It is being designed as a replicable public infrastructure model for India and the entire global south. In partnership w…
S26
Artificial Intelligence & Emerging Tech — Victor Lopez Cabrera:Thanks so much. I really appreciate the invitation and I thank IGF Secretary for inviting us to be …
S27
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Lord Rawal emphasizes that one of the core tenets of the Gayatri Parivar organization is adaptability to change, which h…
S28
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Changing mindsets while implementing at scale is crucial for effective implementation. Existing initiatives like the Gl…
S29
Donor roundtable: Enabling impact at scale in supporting inclusive and sustainable digital economies — Collaboration and partnerships are necessary to overcome the obstacles associated with such initiatives. There is an arg…
S30
Advancing Scientific AI with Safety Ethics and Responsibility — Global South Perspectives and Adaptation: A significant focus was placed on how emerging scientific powers can shape AI …
S31
Democratizing AI Building Trustworthy Systems for Everyone — -Participant- Works with the Gates Foundation in India, focuses on strategic partnerships between Indian researchers and…
S32
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Balancing national champion support with American technology foundation to enable local innovation while maintaining str…
S33
Democratizing AI Building Trustworthy Systems for Everyone — The Gates Foundation representative focused on ground-level challenges, particularly around sustainability and accessibi…
S34
Keynote Adresses at India AI Impact Summit 2026 — Strategic partnership between democracies: Multiple speakers emphasized the alliance between the world’s oldest and larg…
S35
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S36
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S37
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S38
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S39
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S40
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — However, there are concerns that need to be addressed when implementing DPI. One major concern is the risk of exclusion …
S41
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Thanks for the question. You’re right, I think those three words are very key. When you’re talking from a government per…
S42
Building Scalable AI Through Global South Partnerships — The institute’s breakthrough came through systematic re-evaluation, leading to three critical insights. First, governmen…
S43
Building Scalable AI Through Global South Partnerships — Wadhwani learned that working with government from the beginning is the only way to achieve scale with AI solutions. He …
S44
Multistakeholder Partnerships for Thriving AI Ecosystems — Jain outlines the critical success factors for AI deployment based on practical experience. He emphasizes that governmen…
S45
Artificial Intelligence & Emerging Tech — Jörn Erbguth: Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S46
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, so one thing that I didn’t mention that we are working on currently is also these AI regulatory sandb…
S47
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — This addresses the need for systematic innovation pathways that can serve both sectors effectively
S48
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Compute infrastructure and research talent shortages present bigger obstacles than regulatory constraints Sharma identi…
S49
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Summary: Sharma identifies compute resources and research talent as the main barriers, suggesting regulatory issues are l…
S50
Secure Finance Risk-Based AI Policy for the Banking Sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S51
WS #82 A Global South perspective on AI governance — Lufuno T Tshikalange: Thank you, Dr. Melody, and thank you for having us here today. In Africa, we do now have a reg…
S52
From India to the Global South_ Advancing Social Impact with AI — Low level of disagreement with high convergence on AI’s transformative potential. Differences are primarily tactical rat…
S53
From India to the Global South_ Advancing Social Impact with AI — Disagreement level: Low level of disagreement with high convergence on AI’s transformative potential. Differences are pri…
S54
WS #484 Innovative Regulatory Strategies to Digital Inclusion — Carlos Rey-Moreno: So someone that was said at a session yesterday was in relation to to fair trade. Right. Was in relat…
S55
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — By aligning their financial services and efforts, these institutions aim to avoid confusion and conflicting initiatives …
S56
WS #262 Innovative Financing Mechanisms to Bridge the Digital Divide — – The need for innovative financing mechanisms and enabling policy/regulatory environments
S57
Democratizing AI Building Trustworthy Systems for Everyone — Evidence:Example of pregnancy risk stratification tools needing to work differently in Uttar Pradesh versus Telangana du…
S58
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than following historical patterns of automation that replace workers, AI development should prioritize applicati…
S59
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S60
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — While participants agreed on core objectives, they differed on implementation approaches and priorities. Some speakers e…
S61
WS #231 Address Digital Funding Gaps in the Developing World — The discussion revealed relatively low levels of fundamental disagreement among speakers, with most conflicts arising ar…
S62
NATIONAL INFORMATION AND COMMUNICATION TECHNOLOGY POLICY — A challenge has now arisen for the country to implement this policy and I therefore call upon all stakehold…
S63
AI for Social Good Using Technology to Create Real-World Impact — Sunil Wadhwani shared concrete examples from Wadhwani AI’s work, including AI systems that diagnose tuberculosis from co…
S64
AI for Social Good Using Technology to Create Real-World Impact — This discussion at the India AI Impact Summit focused on how open networks and digital public infrastructure (DPI) can e…
S65
Building Scalable AI Through Global South Partnerships — It’s like a case management system for tuberculosis patients. We’ve integrated everything. We developed algorithms into …
S66
Donor roundtable: Enabling impact at scale in supporting inclusive and sustainable digital economies — To accelerate impact in key sectors such as agriculture, manufacturing, and services, a systems approach to digitalizati…
S67
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Cooperation, sharing of technology, and learning are important for effective implementation at scale. Changing mindsets …
S68
Building Scalable AI Through Global South Partnerships — India’s AI mission offers several innovations for global sharing. The country has created compute infrastructure availab…
S69
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Ganesh describes successful collaboration through a consortium of 9 academic institutions working via a Section 8 not-fo…
S70
AI Meets Agriculture Building Food Security and Climate Resilience — This discussion focused on using artificial intelligence to enhance food security and climate resilience in agriculture,…
S71
Democratizing AI Building Trustworthy Systems for Everyone — -Participant- Works with the Gates Foundation in India, focuses on strategic partnerships between Indian researchers and…
S72
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Balancing national champion support with American technology foundation to enable local innovation while maintaining str…
S73
Keynote Addresses at India AI Impact Summit 2026 — Strategic partnership between democracies: Multiple speakers emphasized the alliance between the world’s oldest and larg…
S74
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Balancing national champion support with American technology foundation to enable local innovation while maintaining str…
S75
Keynote Addresses at India AI Impact Summit 2026 — Strategic partnership between democracies: Multiple speakers emphasized the alliance between the world’s oldest and lar…
S76
The Innovation Beneath AI: The US-India Partnership powering the AI Era — The tone was consistently optimistic and forward-looking throughout, with panelists expressing excitement about opportun…
S77
Driving India’s AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S78
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S79
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S80
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S81
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S82
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S83
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S84
AI Algorithms and the Future of Global Diplomacy — These key comments collectively transformed what could have been a technical discussion about AI tools into a sophistica…
S85
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — The tone was professional and analytical, with participants generally optimistic about technological solutions while ack…
S86
Open Forum #18 World Economic Forum – Building Trustworthy Governance — The tone was largely collaborative and optimistic, with panelists from different sectors sharing perspectives on how to …
S87
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S88
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S89
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S90
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S91
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion maintained a constructive and optimistic tone throughout, despite acknowledging significant challenges. S…
S92
The Purpose of Science / DAVOS 2025 — The tone was largely optimistic and excited about AI’s potential to accelerate scientific progress. Speakers emphasized …
S93
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Despite the optimistic tone, participants acknowledged persistent challenges. An audience member, Rita Soni from the Dig…
S94
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S95
OpenAI is set to launch GPT-5 this summer — OpenAI, the renowned AI research lab and owner of ChatGPT, is poised to unveil its latest breakthrough in AI with the immi…
S96
ChatGPT: A year in review — As ChatGPT turns one, the significance of its impact cannot be overstated. What started as a pioneering step in AI has s…
S97
Building Sovereign and Responsible AI Beyond Proof of Concepts — yeah I think you’re right and I think you have your own kind of description of this problem but I was in the US a few mo…
S98
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Alan Paic:Yes, it was not about further countries joining. Well, I can also mention that. So we do have a membership pro…
S99
Keynote-Martin Schroeter — Despite significant financial investments in AI technology by most organizations globally, there is a substantial discon…
S100
Open Forum: A Primer on AI — He believes the future of work should be designed around a better society rather than greed The investment for artifici…
S101
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 4 — The most contentious issue emerged around the structural organisation of the future permanent mechanism, particularly th…
S102
India accelerates semiconductor ambitions with launch of India Semiconductor Research Centre (ISRC) — The Indian government is making significant advancements in the semiconductor sector, with the approval of two major inv…
S103
https://dig.watch/event/india-ai-impact-summit-2026/from-india-to-the-global-south_-advancing-social-impact-with-ai — In a few months. We have a few innovators present here today. In fact, three of the inspiring young innovators who are h…
S104
Global Governance of Digital Technologies: A Contemporary Diplomacy Challenge — Adesina OS (2017) Foreign policy in an era of digital diplomacy Summers J [ed]. Cogent Social Sciences , 1 January 3(1),…
S105
Economic and Commercial Diplomacy in Micro-states: A case study of the Maldives and Mauritius — The first tourist resort in the Maldives opened in the country in 1972 with minimal facilities. During the early year…
S106
From concept to cornerstone, Ethereum turns ten — Ethereum has officially turned ten, marking a decade since the launch of its mainnet, Frontier, on 30 July 2015. Conceiv…
S107
https://app.faicon.ai/ai-impact-summit-2026/keynote-bejul-somaia — You are the protagonists for it. And that is not a comfortable position because protagonists carry weight. They make dec…
S108
Re-evaluating the scaling hypothesis: The AI industry’s shift towards innovative strategies — In recent years, the AI industry has heavily invested in the ‘scaling hypothesis,’ which posited that by expanding data se…
S109
How nonprofits are using AI-based innovations to scale their impact — And we had a certain approach in mind. But through the cohort, we realized that people were trying to solve similar thin…
S110
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai_-why-linguistic-diversity-matters — And then obviously, we started growing up as a team looking at various use cases. People started initially looking at th…
S111
Multistakeholder Partnerships for Thriving AI Ecosystems — I think there is the role of governments coming in. because, yes, there are tremendous advantages of artificial intellig…
S112
Agents of Change AI for Government Services & Climate Resilience — So I’m probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve…
S113
The Millennium Development Goals Report 2015 — Also in 2013, 6.1 million people diagnosed with TB were officially reported to public health authorities. Of these, 5.7 …
S114
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — So fortunately, the government has a DPI called Nixia. It’s a very large data platform. It’s a patient management system…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sunil Wadhwani
10 arguments · 162 words per minute · 2,517 words · 932 seconds
Argument 1
AI‑based cough analysis for rapid TB screening, increasing detection rates by 25% (Sunil Wadhwani)
EXPLANATION
Sunil describes an AI model that analyses the sound of a cough captured on a smartphone to detect tuberculosis instantly, providing a probability score rather than a simple yes/no. This tool has been rolled out nationally and has contributed to a 25% rise in TB detection rates.
EVIDENCE
He explains that the AI system converts cough sounds into a risk probability for TB, works instantly on a smartphone, and has become the national standard, with the World Health Organization calling it a potential global game-changer [56-62]. He also notes that the rate of TB detection has increased by 25% in the last year due to this cough-based tool, allowing more patients to receive treatment [71-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External evidence shows the AI cough analysis tool has been rolled out nationally, raising TB detection by 25% and being highlighted by WHO as a game-changer [S12][S18].
MAJOR DISCUSSION POINT
TB cough AI
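The session describes the screening output only at a high level: a probability rather than a yes/no label. As a purely illustrative sketch — the feature values, weights, and threshold below are hypothetical and not taken from the institute’s actual model — such a scorer can be thought of as mapping acoustic features of a cough recording to a calibrated risk probability:

```python
import math

def tb_risk_score(features, weights, bias):
    """Toy logistic scorer: maps acoustic features extracted from a
    cough recording to a TB risk probability (all values illustrative)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability in (0, 1), not a yes/no

# Hypothetical features and weights -- not the deployed model.
score = tb_risk_score([0.8, 0.2, 0.5], [1.2, -0.4, 0.9], bias=-0.3)
refer_for_testing = score >= 0.5  # threshold would be set clinically
```

A real deployment would learn the weights from labelled recordings and calibrate the referral threshold against field data; the point here is only the probabilistic, rather than binary, output.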
Argument 2
Automated sputum‑analysis AI reduces lab turnaround to one day (Sunil Wadhwani)
EXPLANATION
Sunil reports that an AI model has been integrated into India’s 64 government TB labs to fully automate sputum analysis, cutting the result turnaround time to a single day and enabling faster treatment initiation.
EVIDENCE
He states that the AI model now automates sputum analysis across all 64 government labs, delivering results within a day and sending them back to patients for prompt treatment [63-66].
MAJOR DISCUSSION POINT
Sputum AI automation
Argument 3
Predictive AI alerts caseworkers to patients likely to abandon TB medication (Sunil Wadhwani)
EXPLANATION
Sunil outlines an AI algorithm that predicts which TB patients are at risk of dropping out of their medication regimen, allowing a limited pool of caseworkers to focus their outreach on those high‑risk individuals.
EVIDENCE
He notes that the predictive AI identifies patients likely to fall off medication, enabling 2,000 TB caseworkers to target the right people among 4 million patients, thereby improving adherence [68-70].
MAJOR DISCUSSION POINT
Medication adherence prediction
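The triage logic described above can be sketched abstractly: given predicted dropout probabilities (the numbers below are invented for illustration, not model outputs), the limited caseworker capacity is allocated to the highest-risk patients first:

```python
def triage(patients, risk, capacity):
    """Rank patients by predicted probability of abandoning treatment
    and return those who fit the available caseworker capacity."""
    ranked = sorted(patients, key=lambda p: risk[p], reverse=True)
    return ranked[:capacity]

# Hypothetical risk scores; real capacity would be workers x caseload.
risk = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.4}
follow_up = triage(list(risk), risk, capacity=2)  # -> ["A", "C"]
```

This is the essence of the allocation problem Wadhwani describes: 2,000 caseworkers cannot reach 4 million patients, so the model’s job is to rank, not merely to classify.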
Argument 4
Personalized AI reading‑proficiency tools lower primary‑school dropout rates (Sunil Wadhwani)
EXPLANATION
Sunil describes an AI‑driven suite that creates personalized reading exercises for young children, addressing the main cause of early school dropout—lack of reading ability. The pilot was adopted statewide, reaching millions of students.
EVIDENCE
He explains that the biggest reason for dropout is inability to read, and the AI suite generates personalized stories and exercises for each child; after a successful pilot, the state of Rajasthan made it mandatory for 3 million school-age children [80-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reading-proficiency AI has been scaled to tens of millions of children, providing personalized exercises that address early-grade dropout, as documented in the partnership reports [S18][S17].
MAJOR DISCUSSION POINT
Reading proficiency AI
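The summary gives no implementation details of the personalisation step; as a toy stand-in (the exercise bank and difficulty scale are invented for illustration), the core idea can be reduced to matching content difficulty to each child’s assessed reading level:

```python
def pick_exercise(reading_level, bank):
    """Choose the exercise whose difficulty best matches a child's
    assessed reading level (a stand-in for personalised generation)."""
    return min(bank, key=lambda ex: abs(ex["difficulty"] - reading_level))

# Hypothetical exercise bank; the real system generates content per child.
bank = [{"id": "story-1", "difficulty": 1.0},
        {"id": "story-2", "difficulty": 2.0},
        {"id": "story-3", "difficulty": 3.0}]
chosen = pick_exercise(2.4, bank)  # -> story-2
```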
Argument 5
Early engagement with ministries and alignment to national priorities is essential for scale (Sunil Wadhwani)
EXPLANATION
Sunil emphasizes that working directly with government ministries to identify national priorities ensures that AI solutions are relevant and can be scaled effectively from the outset.
EVIDENCE
He recounts that they identify challenges by speaking directly with the health and education ministries, learning that tuberculosis is a top health priority, which guided their AI focus [37-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration with health and education ministries from the problem-identification stage is emphasized as a key lesson for scaling AI solutions [S12].
MAJOR DISCUSSION POINT
Government‑first approach
AGREED WITH
S. Krishnan, Ankur Vora, Lacina Kone, Shikoh Gitau
DISAGREED WITH
Lacina Kone
Argument 6
Integration with existing government platforms (Nikshay for TB, Rakshak for education) enables nationwide rollout (Sunil Wadhwani)
EXPLANATION
Sunil points out that embedding AI algorithms into established public platforms such as the TB case‑management system Nikshay and the education platform Rakshak allows rapid, country‑wide deployment.
EVIDENCE
He details that the TB solution was integrated into Nikshay, a national case-management system, and the education AI was plugged into Rajasthan’s Rakshak platform covering 70,000 schools, 400,000 teachers and 8 million students, which would have been impossible without these digital public infrastructures [118-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s digital public infrastructure and platforms such as Nikshay and Rakshak are referenced as examples of embedding AI into existing systems for rapid country-wide deployment [S23][S24].
MAJOR DISCUSSION POINT
Platform integration
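As a hedged sketch of the integration pattern (field names and the stand-in model below are hypothetical, not the Nikshay or Rakshak schemas), embedding an algorithm into an existing platform amounts to enriching the platform’s records with a model score without disturbing the host system’s own data:

```python
def annotate_case(case_record, score_fn):
    """Enrich an existing case-management record with a model score,
    leaving the host platform's own fields untouched."""
    enriched = dict(case_record)  # never mutate the platform's record
    enriched["ai_risk_score"] = score_fn(case_record)
    return enriched

# Hypothetical record and stand-in model; field names are illustrative.
record = {"patient_id": "TB-001", "district": "Jaipur"}
scored = annotate_case(record, score_fn=lambda c: 0.42)
```

Keeping the algorithm as an add-on to established records, rather than a parallel system, is what lets a single integration reach every facility already on the platform.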
Argument 7
Leveraging India’s digital public infrastructure (Aadhaar, UPI) provides data pipelines and identity for AI services (Sunil Wadhwani)
EXPLANATION
Sunil highlights that India’s existing digital public goods—such as the biometric ID system Aadhaar and the digital payments network UPI—serve as essential data and identity backbones for scaling AI applications.
EVIDENCE
He cites Aadhaar and UPI as prime examples of digital public infrastructure that underpin health, education and agriculture AI solutions, noting that these platforms were critical for scaling [111-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Aadhaar and UPI are cited as core digital public infrastructure that underpins data and identity pipelines for health, education and agriculture AI services [S12].
MAJOR DISCUSSION POINT
Digital public infrastructure
AGREED WITH
S. Krishnan, Shalini Kapoor
Argument 8
Solutions must simplify frontline workers’ workflows to ensure adoption (Sunil Wadhwani)
EXPLANATION
Sunil stresses that AI tools must make life easier for health workers and teachers; otherwise, even top‑down mandates will not lead to real usage.
EVIDENCE
He explains that if a tool does not ease the frontline worker’s job, adoption will fail, emphasizing the need for a pull rather than a push from the top [125-127].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
User-centric design that eases frontline health workers’ and teachers’ tasks is highlighted as essential for organic adoption of AI tools [S18].
MAJOR DISCUSSION POINT
User‑centric design
Argument 9
Mutual learning model: India exports know‑how while also absorbing African innovations (Sunil Wadhwani)
EXPLANATION
Sunil notes that while the Wadhwani AI initiatives began in India, they are now engaging with governments across the Global South, sharing expertise and also learning from African innovations, creating a two‑way knowledge flow.
EVIDENCE
He describes how, after scaling to 100 million Indians, they received inquiries from Kenya, Rwanda, Indonesia, Egypt and Mexico, leading to the establishment of operations in Rwanda, Ethiopia and Kenya and a commitment to mutual learning [139-168].
MAJOR DISCUSSION POINT
Two‑way South‑South learning
AGREED WITH
Shikoh Gitau, Lacina Kone, Shalini Kapoor, S. Krishnan, Ankur Vora
Argument 10
Patience amid rapid AI change and logistical challenges is a key personal takeaway (Sunil Wadhwani)
EXPLANATION
Sunil reflects that AI is accelerating change, and the logistical challenges of the summit (traffic, etc.) taught him patience, a quality needed to navigate fast‑moving AI environments.
EVIDENCE
He offers a counter-intuitive answer, saying AI makes the world move faster and the traffic challenges taught patience, assuring that progress will happen despite obstacles [391-395].
MAJOR DISCUSSION POINT
Patience in AI era
Shalini Kapoor
2 arguments · 113 words per minute · 923 words · 488 seconds
Argument 1
AI diffusion relies on shared “pathways” and playbooks so innovations can move across borders (Shalini Kapoor)
EXPLANATION
Shalini argues that AI diffusion requires documented pathways and playbooks, similar to digital rails, which allow solutions developed in one country to be replicated elsewhere, fostering South‑South knowledge transfer.
EVIDENCE
She describes AI diffusion as the “routes and rails” that need to be laid, likening it to digital public infrastructure, and says these playbooks enable sharing of AI innovations across nations, referencing the concept’s origin with Geoffrey Hinton [179-194].
MAJOR DISCUSSION POINT
AI pathways and playbooks
AGREED WITH
Lacina Kone
Argument 2
Celebrating collective collaboration (100 pathways to 2030) highlights that AI progress stems from joint effort rather than rivalry (Shalini Kapoor)
EXPLANATION
Shalini celebrates the “100 pathways to 2030” initiative as a clarion call for shared learning, comparing it to mountaineering routes that guide future travelers, emphasizing that AI should be collaborative, not competitive.
EVIDENCE
She references the 100 pathways call, likening it to Edmund Hillary’s route sharing, and notes that panelists agreed AI progress comes from collaboration, not competition [245-256].
MAJOR DISCUSSION POINT
100 pathways initiative
Lacina Kone
2 arguments · 159 words per minute · 807 words · 303 seconds
Argument 1
Smart Africa’s AI Council unites governments, private sector, and philanthropies; a clear regulatory “cloud” is needed before financing can flow (Lacina Kone)
EXPLANATION
Lacina explains that the AI Council, comprising governments, private actors and philanthropies, provides a governance structure, but financing only follows once a stable regulatory environment—metaphorically a “cloud”—is in place.
EVIDENCE
He details the AI Council’s formation with 49 countries signing a declaration, its governance composition of ministers and private members, and stresses that finance depends on a predictable regulatory “cloud” before private sector investment can occur [198-240].
MAJOR DISCUSSION POINT
Regulatory cloud for finance
Argument 2
Vision of a single African digital market illustrates the power of harmonized regulation and shared infrastructure (Lacina Kone)
EXPLANATION
Lacina shares a vision of integrating Africa’s 1.4 billion people into a unified digital market, arguing that harmonized regulation and shared digital infrastructure can unlock continent‑wide scale.
EVIDENCE
He states that Africa’s 1.4 billion population can be leveraged through collaboration, cites the AI Council and the need for regulatory harmonization, and highlights the ambition to create a single digital market with common regulations [401-406].
MAJOR DISCUSSION POINT
Pan‑African digital market
S. Krishnan
3 arguments · 170 words per minute · 1,499 words · 526 seconds
Argument 1
The India AI Mission delivers frugal, sovereign compute and model infrastructure at one‑third global cost (S. Krishnan)
EXPLANATION
S. Krishnan outlines that India’s AI Mission provides low‑cost, sovereign compute and model resources, making AI infrastructure about one‑third as expensive as comparable global offerings.
EVIDENCE
He notes that AI compute in India is now available at a third of the price of the rest of the world, reflecting a frugal approach to building AI infrastructure [315-319].
MAJOR DISCUSSION POINT
Low‑cost sovereign AI infrastructure
Argument 2
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
EXPLANATION
He states that the AI models and platforms built under the mission are government‑subsidized, open‑source, and the Indian government is willing to share them with other Global South countries.
EVIDENCE
He mentions commitments to share the AI Kosh model, sovereign models, and other resources with the UN and other nations, emphasizing that they are taxpayer-funded and open for export [320-327].
MAJOR DISCUSSION POINT
Open‑source AI sharing
AGREED WITH
Sunil Wadhwani, Shikoh Gitau, Lacina Kone, Shalini Kapoor, Ankur Vora
DISAGREED WITH
Lacina Kone, other speakers (Sunil Wadhwani)
Argument 3
Partnership with the Gates Foundation underpins the mission’s focus on people, planet, and progress (S. Krishnan)
EXPLANATION
Krishnan highlights that the Gates Foundation has been a key partner from the planning stage, aligning the AI Mission with the broader agenda of people, planet and progress.
EVIDENCE
He acknowledges the Gates Foundation as a “very key partner” throughout the mission’s design and implementation, and notes joint sessions and collaborations that reinforce this focus [329-332].
MAJOR DISCUSSION POINT
Gates Foundation partnership
Ankur Vora
1 argument · 92 words per minute · 584 words · 377 seconds
Argument 1
The innovation‑to‑impact journey is non‑linear; sustained partnership (e.g., Gates “Advantage India for AI”) is vital (Ankur Vora)
EXPLANATION
Ankur reflects that moving from innovation to real‑world impact is not a straight path and requires continuous effort and partnerships, such as the Gates Foundation’s new “Advantage India for AI” initiative.
EVIDENCE
He remarks that the road from innovation to impact is not guaranteed, cites the seven-year partnership with Gates and the newly announced AI pledge, and calls for continued collaboration [131-137].
MAJOR DISCUSSION POINT
Non‑linear impact pathway
AGREED WITH
Sunil Wadhwani, S. Krishnan, Lacina Kone, Shikoh Gitau
Shikoh Gitau
2 arguments · 152 words per minute · 394 words · 155 seconds
Argument 1
Framing AI as a political and economic issue and addressing the “collaboration tax” are critical for cross‑regional cooperation (Shikoh Gitau)
EXPLANATION
Shikoh argues that AI must be treated not only as a technology but also as a political and economic matter, and that the “collaboration tax” – the effort and resources needed to coordinate across borders – must be acknowledged and mitigated.
EVIDENCE
She discusses making AI a political/economic issue, defines the collaboration tax as the effort and resources required for cross-regional partnership, and calls for mechanisms to reduce this burden [260-273].
MAJOR DISCUSSION POINT
Collaboration tax
Argument 2
The summit showcased Global South unity, reinforcing that AI development is a collaborative race, not a competition (Shikoh Gitau)
EXPLANATION
Shikoh describes the summit’s atmosphere of collective celebration, emphasizing that the Global South can work together on AI, turning it into a multi‑horse race rather than a binary competition.
EVIDENCE
She recounts standing before a diverse audience of about 300 people, feeling that the Global South is united in AI, and that the landscape has shifted from a two-horse race to a multiple-horse race [398-401].
MAJOR DISCUSSION POINT
Global South unity
Agreements
Agreement Points
Government partnership and alignment with national priorities is essential for scaling AI solutions.
Speakers: Sunil Wadhwani, S. Krishnan, Ankur Vora, Lacina Kone, Shikoh Gitau
Early engagement with ministries and alignment to national priorities is essential for scale (Sunil Wadhwani)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
The innovation‑to‑impact journey is non‑linear; sustained partnership (e.g., Gates “Advantage India for AI”) is vital (Ankur Vora)
Smart Africa’s AI Council … a clear regulatory “cloud” is needed before financing can flow (Lacina Kone)
Framing AI as a political and economic issue and the need for political goodwill (Shikoh Gitau)
All speakers emphasized that working closely with governments, through ministries, regulatory frameworks, and sustained partnerships, is the cornerstone for deploying AI at scale, whether in health, education, or broader AI initiatives [37-41][92-100][111-118][320-327][131-137][231-238][266-271].
POLICY CONTEXT (KNOWLEDGE BASE)
The Building Scalable AI reports stress that government partnership from day one is essential for achieving scale in the Global South, emphasizing deep collaboration with senior civil servants and alignment with national priorities [S42][S43][S44].
Digital public infrastructure (DPI) is a critical backbone for AI deployment and scaling.
Speakers: Sunil Wadhwani, S. Krishnan, Shalini Kapoor
Leveraging India’s digital public infrastructure (Aadhaar, UPI) provides data pipelines and identity for AI services (Sunil Wadhwani)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
AI diffusion relies on shared “pathways” and playbooks, analogous to digital rails (Shalini Kapoor)
The panelists agreed that existing digital platforms such as Aadhaar, UPI, Nikshay, Rakshak, and the broader AI mission infrastructure enable rapid, nationwide AI roll-outs and can be leveraged by other countries through open-source sharing and documented pathways [111-124][307-319][179-194].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple panels describe AI as a potential component of DPI, highlighting inclusion, integrity, and safety as core pillars and noting that AI is still in the early days of being integrated into DPI frameworks like Aadhaar or UPI [S39][S40][S41].
South‑South collaboration and mutual learning are vital for AI diffusion across the Global South.
Speakers: Sunil Wadhwani, Shikoh Gitau, Lacina Kone, Shalini Kapoor, S. Krishnan, Ankur Vora
Mutual learning model: India exports know‑how while also absorbing African innovations (Sunil Wadhwani)
Framing AI as a political and economic issue and addressing the “collaboration tax” (Shikoh Gitau)
Smart Africa’s AI Council unites governments, private sector, and philanthropies; a clear regulatory “cloud” is needed before financing can flow (Lacina Kone)
AI diffusion relies on shared “pathways” and playbooks … collaboration not competition (Shalini Kapoor)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
The innovation‑to‑impact journey is non‑linear; sustained partnership … for Global South (Ankur Vora)
All speakers highlighted the importance of two-way knowledge exchange, shared playbooks, and coordinated institutional mechanisms (AI Council, Gates partnership) to spread AI solutions across Africa, Asia and beyond, stressing that collaboration, not competition, drives impact [139-168][260-273][203-210][245-256][320-327][131-137].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of South-South partnerships is highlighted in the Building Scalable AI study and reinforced by the African Union’s 2024 AI strategy, which calls for regional cooperation and knowledge sharing across Global South nations [S42][S51][S52].
Documented pathways/playbooks are needed to replicate AI innovations across borders.
Speakers: Shalini Kapoor, Lacina Kone
AI diffusion relies on shared “pathways” and playbooks so innovations can move across borders (Shalini Kapoor)
The AI Council’s thematic groups address computing, data, skills, regulation, market and investment – forming collaborative pathways (Lacina Kone)
Both panelists argued that creating clear, documented routes, whether called “digital rails” or thematic groups, facilitates the transfer of AI solutions from one country to another, enabling scalable South-South learning [179-194][221-226].
POLICY CONTEXT (KNOWLEDGE BASE)
The India-Israel Innovation Roundtable identified systematic innovation pathways and playbooks as a key requirement for effective cross-border replication of AI solutions [S47].
Similar Viewpoints
Both stress that government‑backed, open‑source AI resources aligned with national priorities enable rapid, large‑scale deployment and can be exported to other countries [37-41][92-100][111-118][320-327].
Speakers: Sunil Wadhwani, S. Krishnan
Early engagement with ministries and alignment to national priorities is essential for scale (Sunil Wadhwani)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
Both highlight that a predictable regulatory environment created with government involvement is a prerequisite for financing and scaling AI initiatives [92-100][231-238].
Speakers: Sunil Wadhwani, Lacina Kone
Early engagement with ministries and alignment to national priorities is essential for scale (Sunil Wadhwani)
Smart Africa’s AI Council … a clear regulatory “cloud” is needed before financing can flow (Lacina Kone)
Both see structured, thematic pathways (playbooks, councils) as essential mechanisms for AI diffusion across nations [179-194][221-226].
Speakers: Shalini Kapoor, Lacina Kone
AI diffusion relies on shared “pathways” and playbooks … (Shalini Kapoor)
The AI Council’s thematic groups … provide collaborative pathways (Lacina Kone)
Both underscore that long‑term partnerships (e.g., with the Gates Foundation) and open‑source sharing are key to moving from innovation to impact at scale [131-137][320-327].
Speakers: Ankur Vora, S. Krishnan
The innovation‑to‑impact journey is non‑linear; sustained partnership … is vital (Ankur Vora)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
Unexpected Consensus
Finance is not the primary barrier to AI scaling; regulatory certainty is.
Speakers: Lacina Kone, Sunil Wadhwani
Finance is not the issue; regulatory “cloud” must exist first (Lacina Kone)
Early engagement with ministries … ensures scaling; focus is on government alignment rather than financing (Sunil Wadhwani)
While many discussions assume funding constraints limit AI deployment, both Lacina and Sunil converge on the view that a stable regulatory environment and government partnership are the real prerequisites, making the finance-vs-regulation emphasis an unexpected point of agreement [233-238][92-100].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI adoption in the Global South note that compute and talent shortages outweigh financing constraints, while regulatory sandboxes are promoted to provide certainty, suggesting regulatory frameworks are more decisive than finance for scaling [S46][S48][S49].
AI tools must be designed to make frontline workers’ lives easier to ensure adoption.
Speakers: Sunil Wadhwani, Shikoh Gitau
Solutions must simplify frontline workers’ workflows to ensure adoption (Sunil Wadhwani)
Political goodwill and a “pull” approach are needed for AI adoption (Shikoh Gitau)
Sunil’s user‑centric design point and Shikoh’s emphasis on political goodwill both converge on the idea that AI adoption hinges on making tools attractive and easy for end‑users, a nuance not explicitly stated elsewhere.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from health and agriculture pilots shows that AI systems need to work on the edge and be tailored to local policies, emphasizing usability for frontline workers as a prerequisite for adoption [S57][S58][S59].
Overall Assessment

The panel displayed strong consensus around three pillars: (1) government partnership and regulatory alignment as the foundation for scaling AI; (2) leveraging existing digital public infrastructure and open‑source models to enable rapid, cost‑effective deployment; (3) fostering South‑South collaboration through shared pathways, playbooks, and mutual learning. These agreements signal a coordinated vision that AI for development can be accelerated by aligning policy, infrastructure, and cross‑regional cooperation.

High consensus – the speakers repeatedly reinforced the same themes across multiple statements, indicating a unified strategic direction that could drive coordinated actions among governments, donors, and private actors in the Global South.

Differences
Different Viewpoints
Scaling approach: government‑first versus private‑sector‑led execution
Speakers: Sunil Wadhwani, Lacina Kone
Early engagement with ministries and alignment to national priorities is essential for scale (Sunil Wadhwani)
Smart Africa’s AI Council unites governments, private sector, and philanthropies; a clear regulatory “cloud” is needed before financing can flow (Lacina Kone)
Sunil stresses that AI solutions must be co-designed with ministries from day one, with government accountability and integration into public platforms to achieve scale [92-100][107-110][111-118]. Lacina argues that while governments set the environment, the private sector must execute and that financing only follows once a predictable regulatory “cloud” exists, downplaying finance as a barrier [231-235][236-238].
POLICY CONTEXT (KNOWLEDGE BASE)
Forum discussions reveal a split between government-led and market-driven scaling models, with policy papers calling for clear delineation of public and private roles in AI ecosystems [S60][S61][S62].
Importance of financing for AI scaling
Speakers: Lacina Kone, Other speakers (Sunil Wadhwani, S. Krishnan)
Smart Africa’s AI Council … finance is not the issue (Lacina Kone)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
Lacina claims that finance is the last concern and that the regulatory environment is the primary prerequisite for private investment [233-235]. In contrast, Krishnan and Sunil discuss the need for subsidized resources, partnerships (e.g., with the Gates Foundation), and sharing models, implying that financing mechanisms are central to scaling AI across countries [320-327][139-168].
POLICY CONTEXT (KNOWLEDGE BASE)
While some studies downplay finance as the main obstacle, other reports emphasize the need for innovative financing mechanisms and coordinated financial services to bridge digital gaps, underscoring ongoing debate on financing’s role [S48][S55][S56].
Unexpected Differences
Finance as a non‑issue versus need for financing mechanisms
Speakers: Lacina Kone, Other participants (Sunil Wadhwani, S. Krishnan)
Smart Africa’s AI Council … finance is not the issue (Lacina Kone)
Government‑subsidized AI resources are open‑source and can be shared … (S. Krishnan)
Lacina’s claim that finance is merely the last hurdle contrasts with Krishnan’s and Sunil’s emphasis on subsidized resources, partnerships, and funding models to enable large-scale AI deployment, revealing an unexpected divergence on the role of financing [233-235][320-327][139-168].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy recommendations advocate for innovative financing instruments to address the digital divide, indicating that even if finance is not the top technical barrier, dedicated mechanisms are still considered essential [S56][S55].
AI as a political/economic issue versus a technical solution
Speakers: Shikoh Gitau, Sunil Wadhwani
Framing AI as a political and economic issue and addressing the “collaboration tax” (Shikoh Gitau)
We have a government that is very pro‑technology … clear regulatory framework … openness in government (Sunil Wadhwani)
Shikoh argues that AI must be treated as a political/economic matter and that the collaboration tax must be mitigated, while Sunil focuses on technical solutions, government openness, and regulatory frameworks without explicitly addressing the political-economy dimension, indicating an unexpected mismatch in framing [265-271][260-273][139-168].
Overall Assessment

The discussion showed broad consensus on AI’s potential for social impact and the need for South‑South collaboration, but clear disagreements emerged around the primary scaling mechanism (government‑first vs private‑sector‑led), the role of financing, and how AI should be framed (technical tool vs political/economic issue).

Moderate – while participants share common goals, divergent views on implementation pathways could affect coordination and resource allocation, requiring deliberate alignment to avoid fragmented efforts.

Partial Agreements
All three agree that AI can generate large‑scale social impact, but Sunil focuses on technical deployment through government platforms, Ankur stresses the need for continuous partnership to bridge innovation and impact, and Krishnan highlights open‑source, subsidized resources as the means to achieve that impact [56-62][71-73][131-137][320-327].
Speakers: Sunil Wadhwani, Ankur Vora, S. Krishnan
AI‑based cough analysis for rapid TB screening, increasing detection rates by 25% (Sunil Wadhwani)
The innovation‑to‑impact journey is non‑linear; sustained partnership is vital (Ankur Vora)
Government‑subsidized AI resources are open‑source and can be shared with other Global South nations (S. Krishnan)
All endorse South‑South collaboration, yet Shalini emphasizes documented pathways, Shikoh stresses reducing the collaboration tax and political goodwill, while Sunil describes a two‑way learning model, showing different preferred mechanisms for knowledge transfer [179-194][260-273][139-168].
Speakers: Shalini Kapoor, Shikoh Gitau, Sunil Wadhwani
AI diffusion relies on shared “pathways” and playbooks so innovations can move across borders (Shalini Kapoor)
Framing AI as a political and economic issue and addressing the “collaboration tax” are critical for cross‑regional cooperation (Shikoh Gitau)
Mutual learning model: India exports know‑how while also absorbing African innovations (Sunil Wadhwani)
Takeaways
Key takeaways
AI‑driven tools in India have demonstrably improved health outcomes (cough‑based TB screening, automated sputum analysis, medication‑adherence prediction) and education outcomes (personalized reading‑proficiency platform).
Scaling AI solutions requires early, humble partnership with government ministries, alignment to national priorities, and integration with existing digital public infrastructure (e.g., Nikshay, Rakshak, Aadhaar, UPI).
Front‑line user experience is critical; tools must simplify workflows for health workers and teachers to achieve adoption.
South‑South collaboration is framed as sharing “pathways” and playbooks so that innovations can be diffused across borders without each country reinventing the wheel.
Smart Africa’s AI Council illustrates a multi‑stakeholder model (government, private sector, philanthropies) where a clear regulatory “cloud” precedes financing.
India’s AI Mission provides frugal, sovereign compute and model infrastructure at roughly one‑third global cost, with open‑source, government‑subsidized resources intended for sharing with other Global South nations.
Partnerships such as the Gates Foundation’s “Advantage India for AI” are seen as essential to bridge innovation and impact and to support the 100‑Pathways‑to‑2030 agenda.
The summit highlighted the non‑linear journey from innovation to impact, the need for patience amid rapid AI change, and the collective spirit of the Global South in AI development.
Resolutions and action items
Launch operations of the Wadhwani AI Institute in Rwanda, Ethiopia, and Kenya (team dispatched this month).
Commit to impact 500 million people globally by 2040, building on the current 100 million annual impact in India.
Share India’s AI treasury (compute, models, datasets) with other Global South countries once capacity thresholds are met.
Establish a Center for International Cooperation under India’s National Institute of Smart Governance to support DPI implementation abroad.
Continue and deepen partnership with the Gates Foundation for funding, knowledge‑exchange, and scaling of AI solutions.
Integrate AI algorithms into existing government platforms in partner countries (e.g., analogous to Nikshay and Rakshak).
Develop and disseminate “AI pathway” playbooks to facilitate South‑South knowledge transfer.
Unresolved issues
Specific mechanisms and timelines for transferring India’s AI models and compute infrastructure to other countries remain undefined.
Details on how to harmonize regulatory frameworks across diverse African nations to create the required “regulatory cloud” are not settled.
The financing model for large‑scale deployments in partner countries (beyond the statement that finance is “the last thing”) lacks concrete plans.
Operationalization of the “collaboration tax” – how to reduce the effort and resources needed for cross‑regional partnerships – was raised but not resolved.
Metrics and monitoring frameworks for evaluating the impact of exported AI solutions in new contexts were not detailed.
Suggested compromises
Prioritize creation of a stable regulatory environment (“cloud”) before seeking large private‑sector financing, acknowledging that finance follows regulation.
Adopt a humility‑first approach when engaging with government officials, positioning AI providers as partners seeking to understand problems rather than imposing solutions.
Balance private‑sector execution with public‑sector facilitation: government provides open digital infrastructure and subsidies, private firms deliver technology, philanthropies de‑risk early pilots.
Design AI tools to make frontline workers’ lives easier, thereby generating pull‑based adoption rather than top‑down mandates.
Thought Provoking Comments
The only way to scale is government. You have to work with government from day one, think about scale from day one, and integrate your AI solutions into existing digital public infrastructure like Nikshay for TB and Rakshak for education.
Highlights that technical brilliance alone is insufficient; sustainable impact requires institutional partnership, early planning for scale, and leveraging existing public platforms—a perspective that reframes how AI projects should be designed.
Shifted the conversation from describing AI solutions to discussing the structural requirements for nationwide deployment. It prompted follow‑up remarks about digital public infrastructure from other panelists and set the stage for the discussion on South‑South knowledge transfer.
Speaker: Sunil Wadhwani
AI solutions are great at a macro level, but if the frontline health worker or teacher doesn’t find the tool makes their life easier, it won’t be adopted.
Emphasizes the human‑centered design principle that adoption hinges on usability for end‑users, not just on algorithmic performance, adding depth to the scaling conversation.
Led participants to consider user experience and incentive structures, reinforcing the earlier point about government partnership and influencing later remarks about political goodwill and collaboration tax.
Speaker: Sunil Wadhwani
AI diffusion is about the routes and rails that need to be laid, just like electricity was diffused across continents; we need playbooks that can be shared so that a solution built in Kenya can be used in India and vice‑versa.
Introduces the metaphor of diffusion infrastructure, framing AI adoption as a systematic, transport‑like process rather than isolated projects, and calls for reusable playbooks.
Opened a new thematic thread on “pathways to scale” and prompted Lacina Kone and Shikoh Gitau to discuss regulatory and collaboration mechanisms, moving the dialogue toward concrete mechanisms for South‑South exchange.
Speaker: Shalini Kapoor
Finance is not the issue; finance is the last thing you should think about. The real prerequisite is the regulatory environment – the ‘cloud’ that creates the rain for investment.
Challenges the common assumption that lack of capital blocks AI projects, redirecting focus to policy and regulatory stability as the primary enabler.
Reoriented the discussion from funding scarcity to the need for conducive policy frameworks, influencing Shikoh’s point about political goodwill and reinforcing Sunil’s earlier emphasis on government partnership.
Speaker: Lacina Kone
We need to start talking about the ‘collaboration tax’ – the effort, resources, and pain required to bring different stakeholders together, and how governments should help lower that tax.
Coins a new term that captures the hidden costs of cross‑border and cross‑sector collaboration, highlighting a practical barrier often overlooked in high‑level AI talks.
Prompted the panel to consider concrete steps to reduce collaboration friction, linking back to earlier points about regulatory clouds and government facilitation, and deepening the conversation about operationalizing South‑South partnerships.
Speaker: Shikoh Gitau
India’s AI mission model provides compute at a third of the global price, builds sovereign models, and is designed to be shared with the Global South – a frugal, open‑source approach to AI infrastructure.
Offers a concrete, scalable model for democratizing AI resources, moving the dialogue from abstract principles to a tangible example of how a nation can enable widespread AI access.
Validated Sunil’s and Shalini’s earlier points about public infrastructure, and gave the audience a real‑world template for replication, steering the conversation toward actionable sharing of resources.
Speaker: S. Krishnan
The journey from innovation to impact is not a straight road; we often focus on the innovation part and assume impact will follow, but it requires deliberate work and partnership.
Summarizes a central theme of the session, reminding participants that breakthroughs need systematic pathways to translate into societal benefit.
Served as a reflective pivot that reinforced earlier insights about scaling, government involvement, and South‑South collaboration, tying together the various strands of the discussion.
Speaker: Ankur Vora
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from showcasing impressive AI applications to dissecting the systemic foundations needed for real impact. Sunil’s early revelation that government partnership and integration with digital public infrastructure are essential reframed the conversation around scalability. This was deepened by Shalini’s diffusion metaphor, Lacina’s regulatory‑vs‑finance argument, and Shikoh’s ‘collaboration tax’ concept, each adding layers of complexity about policy, economics, and operational friction. S. Krishnan’s concrete example of India’s frugal AI mission provided a tangible model for replication, turning abstract ideas into actionable pathways. Collectively, these comments redirected the dialogue toward practical, collaborative, and inclusive strategies for South‑South knowledge transfer, emphasizing that technology alone is insufficient without the right institutional, regulatory, and human‑centered frameworks.

Follow-up Questions
How can the learnings from India’s AI initiatives be effectively transferred and adapted to other Global South contexts?
Ankur asked Sunil to discuss South‑South partnership and the transfer of learnings, indicating a need for concrete strategies for cross‑regional adaptation.
Speaker: Ankur Vora
What are the concrete opportunities for South‑South collaboration in building pathways to scale AI solutions?
Shalini sought Lacina’s view on collaborative pathways, highlighting a need to identify specific mechanisms for joint scaling across countries.
Speaker: Shalini Kapoor (addressed to Lacina Kone)
How can AI diffusion help move use cases from pilot to production across the Global South?
She asked how diffusion can be operationalized, pointing to a gap in understanding the transition from experimental pilots to large‑scale deployment.
Speaker: Shalini Kapoor (addressed to Shikoh Gitau)
What is the ‘collaboration tax’ and how can it be reduced to facilitate smoother partnerships?
Shikoh introduced the concept of collaboration tax, suggesting further investigation into the resources and effort required for cross‑country AI collaborations.
Speaker: Shikoh Gitau
Why is the investment cycle perceived as too slow for AI, and what financing models could accelerate AI deployment in the Global South?
Lacina highlighted the sluggish investment pace, indicating a need for research into more agile funding mechanisms for AI projects.
Speaker: Lacina Kone
How can frugal AI compute models and sovereign AI infrastructure be shared internationally while ensuring security and sustainability?
Krishnan discussed India’s low‑cost AI compute and sovereign models, prompting further study on scalable, secure sharing of such resources with other nations.
Speaker: S. Krishnan
What are the best practices for integrating AI solutions with existing government digital public infrastructure (e.g., Nikshay, Rakshak) to achieve scale?
Sunil emphasized the importance of leveraging government platforms, suggesting research into integration frameworks and interoperability standards.
Speaker: Sunil Wadhwani
What is the measurable impact of the AI‑based cough detection tool on TB treatment outcomes beyond detection rates?
While detection rates increased, the transcript notes a need to assess downstream effects on treatment success and mortality.
Speaker: Sunil Wadhwani
How effective are AI‑driven personalized reading tools in improving literacy outcomes at scale, and what metrics should be used?
Sunil described the reading proficiency initiative, indicating a need for rigorous evaluation of its educational impact across millions of students.
Speaker: Sunil Wadhwani
How can AI tools be designed to make frontline health workers’ and teachers’ lives easier, ensuring adoption and sustained use?
Sunil highlighted that tools must ease frontline workers’ tasks, pointing to research on user‑centered design and adoption incentives.
Speaker: Sunil Wadhwani
What mechanisms can ensure that AI solutions remain open‑source yet protected from cyber threats in large‑scale deployments?
Krishnan mentioned open‑source AI with security safeguards, suggesting further study on balancing openness with cybersecurity.
Speaker: S. Krishnan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Advancing Scientific AI with Safety Ethics and Responsibility


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel examined how the rapid emergence of AI-driven biodesign tools is reshaping bio-security governance, moving risk from physical labs to the design stage and demanding new oversight mechanisms [7-13]. Participants stressed that data governance, model evaluation and red-team exercises remain essential components of this response [14-15].


Speakers argued that a single central authority in Delhi would be ineffective and called for decentralized, institution-level checks such as empowering biosafety officers and creating adaptive, non-periodic oversight mechanisms [24-28][31]. They noted that traditional paper-based facility inspections are outdated for AI-enabled research and that adaptive, rapid review processes are needed [133-138]. The discussion highlighted the uneven resources across Indian research institutions and the necessity of training more personnel in chemical, AI, and nuclear security to match the vibrant but heterogeneous scientific ecosystem [15-18][122-130][133-140].


In response to concerns about open science, the panel advocated a tiered-access model combined with pre-deployment assessments and “know-your-customer” style vetting, arguing that blanket restrictions would stifle innovation while differentiated capability-level governance could mitigate misuse [41-48][49-57]. They emphasized that open-source tools are critical for innovation in low-resource settings and should not be conflated with danger [54-57]. RAND Europe’s global risk index and structured rubrics for evaluating frontier models before release were cited as useful tools [43-45][106-108].


The discussion highlighted that many Southeast Asian countries lack AI readiness and that safety evaluations must incorporate sociocultural contexts, small-model edge deployments, and accountability frameworks, with self-regulation complementing formal standards [66-71][75-79]. Capacity-building measures such as AI literacy and awareness programs for marginalized communities were identified as necessary to close gaps [175-176]. Proposals included regular six-monthly independent monitoring, an AI safety institute linked to governments, and shared incident-reporting mechanisms with tiered confidentiality to enable coordinated global oversight [105-119][161-170][226-235].


Overall, the panel concluded that effective AI-biosecurity governance will require decentralized yet integrated oversight, capacity building in the global south, socio-technical risk assessment, and harmonized data and legal frameworks to ensure both scientific progress and safety, with coordinated agency and cross-border collaboration to avoid fragmentation [24-31][122-154][161-183][209-242][255-263].


Keypoints


Major discussion points


AI is moving bio-risk upstream, demanding new governance structures.


The rapid rise of AI-enabled biodesign tools decouples risky capabilities from traditional physical containment, shifting the threat to the design stage of biology [6-13]. This calls for “more decentralized checks and balances” and empowerment of existing biosafety and information-security offices, rather than a single central authority [23-28][24-27].


Balancing open-science benefits with controlled access through tiered, capability-based governance.


Participants argue for “tiered access and contextual norms” and stress the importance of pre-deployment assessments with structured rubrics [40-48]. They warn against conflating open-source with danger, advocating differentiated governance at the capability level rather than blanket restrictions [55-58].


Context-specific capacity building and socio-technical evaluation for the Global South.


The panel highlights gaps in AI readiness, the need for small-model, edge-focused solutions, and participatory risk assessments that reflect local socio-cultural realities [62-71][73-78]. Self-regulation and unified, adaptable frameworks are seen as essential for countries like India and other low-resource settings [78-79].


Institutionalizing independent evaluation, red-teamings, and continuous monitoring.


A six-monthly “ritual” of risk assessment, possibly run by an AI-safety institute with formal government links, is proposed to embed systematic oversight [105-112][113-119]. Such mechanisms would require significant multilateral investment and coordination with bodies like the WHO or the Biological Weapons Convention [116-118].


Ensuring interoperable, cross-border biosurveillance and data-sharing.


The discussion stresses the current fragmentation of data standards and legal regimes, recommending harmonised standards (e.g., federated HL7-FHIR-like frameworks), pre-negotiated legal safe-harbors, and shared evaluation criteria to enable coordinated pandemic-response and bio-risk monitoring [212-224][226-235].


Overall purpose / goal of the discussion


The panel aimed to map the emerging security and governance challenges posed by AI tools that can design biological agents, and to explore practical policy, technical, and institutional pathways-ranging from decentralized oversight to international data-standard harmonisation-that can preserve the benefits of open scientific innovation while preventing misuse.


Overall tone and its evolution


The conversation maintained a professional, solution-oriented tone throughout. It began with a broad framing question, moved into a diagnostic phase highlighting risks and structural gaps, then shifted to constructive proposals and concrete action items. While the urgency of the bio-security threat was repeatedly underscored, the tone remained collaborative rather than alarmist, ending on a forward-looking note emphasizing coordination and capacity-building.


Speakers

Moderator


Role/Title: Conference Moderator (session moderator) [S15]


Area of Expertise: Session moderation


Speaker 1


Role/Title: (not specified in external sources)


Area of Expertise: Biosecurity, AI-enabled biodesign, risk governance (as discussed in the transcript)


Speaker 2


Role/Title: (not specified in external sources)


Area of Expertise: AI safety and security governance, independent evaluation and red-teaming of AI systems (as discussed in the transcript)


Speaker 3


Role/Title: (not specified in external sources)


Area of Expertise: AI policy, socio-technical assessment, AI readiness for the Global South (as discussed in the transcript)


Audience Member 1


Role/Title: Founder of Corral Inc [S3]


Area of Expertise: (not specified)


Audience Member 2


Role/Title: Participant from a German group [S18]


Area of Expertise: (not specified)


Audience Member 3


Role/Title: (not specified)


Area of Expertise: (not specified)


Additional speakers:


Justin – mentioned only by name in the closing remarks; no role, title, or expertise provided.


Full session reportComprehensive analysis and detailed insights

Opening framing – The moderator began by asking whether the challenges of AI-enabled biodesign should be framed primarily as a data-governance problem, a model-design issue, or a verification-and-compliance matter [1].


Speaker 1 (biosecurity perspective) – He clarified that he is a bio-security specialist, not an AI-safety expert, and described a deep structural change in the life sciences: risk governance has traditionally relied on physical controls such as lab inspections and material-transfer agreements [7-8]. The rapid emergence of more than 1,500 AI-enabled biodesign tools, from protein engineering to pathogen-host interaction modelling, has begun to decouple risky capabilities from those physical containment measures, moving the risk “upstream” to the design phase [9-13]. While data governance, model evaluation and red-team exercises remain essential [14-15], they must be complemented by new upstream mechanisms. He called for training personnel in chemical, AI and nuclear security [19-22] and for empowering information-security and biosafety offices to handle emerging AI risks [23-28][31]. Rather than a single central authority in Delhi, he advocated adaptive, decentralized oversight that goes beyond periodic, paper-based inspections [23-28][31]. He also proposed a tiered risk-classification scheme for AI-enabled biodesign tools, with higher scrutiny for virus-focused models [122-156]. Two additional points were made in his closing remarks: (i) the digital-to-physical barrier: even freely available AI designs require physical infrastructure to become actual pathogens, preserving a control point [250-255]; and (ii) CEPI’s agentic-AI platform is already being used to detect jailbreak attempts and to accelerate vaccine development [300-310].


Open-science discussion (Speaker 2) – Responding to the moderator’s second question, he said a binary answer was impossible and advocated a “tiered-access and contextual-norms” approach [41-42]. He praised RAND Europe’s global risk index and its structured pre-deployment assessment rubrics, likening them to “know-your-customer” (KYC) procedures that can credential researchers for defensive work while preserving open-source innovation for low-resource settings [43-57]. He stressed that blanket restrictions would stifle innovation; instead, differentiated, capability-level governance could mitigate misuse without conflating open-source tools with danger [54-58]. He also warned that once frontier models are released the danger is “already out there” and cannot be easily withdrawn [84-89][105-112].


Institutional gaps (Speaker 3, Geeta) – Asked to identify the most immediate gaps, she noted that AI-readiness varies dramatically across the Global South: India ranks third globally, but many Southeast Asian nations lag far behind [62-64]. Large language models are typically trained on Western data, and existing safety benchmarks show a 20-30% failure rate in biological risk assessments [65-71][73-78]. She called for socio-cultural evaluations, small-model edge deployments, and participatory risk-assessment processes involving end-users [65-78]. India’s policy of voluntary self-regulation was highlighted, and she urged a unified yet adaptable framework that can be tailored to diverse deployment environments [78-79].


Independent evaluation norm (Speaker 2) – In response to whether independent evaluation and red-team testing should become a global norm, he drew an analogy to nuclear oversight (IAEA) and noted that biology is highly diffused, making traceability difficult [84-89]. Citing a recent SecureBio study in which a frontier LLM (ChatGPT o3) outperformed expert virologists on wet-lab troubleshooting [100-104], he proposed a systematic six-monthly monitoring ritual conducted by a credentialed, independent AI-safety institute with formal government links [84-89][105-112]. He clarified that the institute would anchor its work to the Biological Weapons Convention or the WHO, even though the relationship is not yet fully established [113-119][116-118].


Feasibility in heterogeneous ecosystems (Speaker 1) – He described the “wide heterogeneity” of Indian institutions, ranging from well-resourced labs to under-funded centres [122-131]. Traditional periodic, paper-based inspections are outdated; instead, rapid, adaptive review processes are required [133-138]. He called for upstream safeguards, cross-trained AI-biosafety review panels, and investment in domestic evaluation capacity such as the AI safety institute at IIT Madras [147-154]. Leveraging tech-sovereignty measures to control the import and deployment of critical AI models was also recommended [155-156].


Emerging scientific powers shaping governance (Speaker 3) – Geeta explained that India is already creating “sandboxes” for health and ideological AI systems and that a forthcoming Global-South network for trustworthy AI and an AI-safety commons will enable low-resource countries to share tools, benchmarks and best practices [161-166]. She described an incident-reporting framework tailored to Indian contexts, capturing a taxonomy of harms (including physical, psychological, cyber, socio-economic and environmental impacts) and supporting capacity-building programmes for healthcare workers [169-176][267-274]. These initiatives are complemented by collaborative multi-stakeholder efforts and the recently published AI-governance guidelines from MeitY [178-182][260-262].


Scope of safety evaluation (Speaker 1) – He broadened the discussion from model-centric assessment to a full socio-technical appraisal [189-203]. Key considerations include capability uplift relative to governmental capacity, incentive structures, cross-border diffusion of risk, and the digital-to-physical barrier that still limits the translation of malicious code into real pathogens [194-201][250-255]. He warned that without integrating AI evaluation into existing biosafety and resource-security systems, audits would merely scrutinise algorithms while ignoring the institutions that operationalise them [200-202].


Avoiding fragmentation (Speaker 2) – He highlighted that many countries are deploying AI-driven biosurveillance (syndromic, genomic sequencing, outbreak modelling) on incompatible data standards and legal regimes, leading to dangerous data hoarding, as observed during the COVID-19 pandemic [212-224]. He proposed three remedies: (i) harmonising data standards through a federated HL7-FHIR-like framework for public-health surveillance; (ii) establishing pre-negotiated legal safe-harbours for cross-border data sharing during emergencies; and (iii) agreeing on shared evaluation criteria that can be embedded in national surveillance systems [226-235][230-236]. He also noted the siloing between AI-governance and bio-security communities, which creates a “gap where the risk happens” [237-241].


Closing remarks (moderator) – The panel’s key points were summarised: safety evaluation is systemic; incident-response mechanisms and cross-border solutions are needed; and a balance must be struck between open-source innovation and managed access [255-263].


Audience Q&A


* Harms taxonomy: a researcher asked to expand the definition of harms beyond physical injury; Geeta’s team explained that their incident-reporting framework already categorises physical, psychological, cyber, socio-economic and environmental harms and that they are developing toolkits to assess healthcare-worker perceptions of AI [264-274][267-274].


* Model drift: a participant raised temporal model drift; Geeta responded that monitoring data-distribution drift is part of the system-monitoring approach and a key safety criterion [286-288].


* Web of prevention: Speaker 1 advocated a decentralized yet integrated leadership structure that empowers biosafety officers and provides a top-level reporting channel [294-299]; Speaker 2 illustrated Singapore’s multi-agency coordination model (NEA, MOH, Communicable Disease Agency, PREPARE) as a concrete example of an effective “web of prevention” [300-313].
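Geeta’s point about monitoring data-distribution drift can be illustrated with a minimal sketch using the population stability index (PSI); the uniform binning and the 0.2 alert threshold are common rules of thumb assumed here, not details from the session:

```python
# Minimal sketch of data-distribution drift monitoring via the population
# stability index (PSI). Binning scheme and threshold are assumptions.
from collections import Counter
import math


def psi(baseline, current, bins=10, lo=0.0, hi=1.0, eps=1e-6):
    """PSI between two samples of scores in [lo, hi); higher means more drift."""
    def frac(sample):
        counts = Counter(min(int((x - lo) / (hi - lo) * bins), bins - 1)
                         for x in sample)
        return [counts.get(b, 0) / len(sample) + eps for b in range(bins)]
    p, q = frac(baseline), frac(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))


baseline = [i / 100 for i in range(100)]                 # roughly uniform scores
shifted = [min(x * 0.5 + 0.4, 0.99) for x in baseline]   # distribution shift
print(f"PSI vs self:    {psi(baseline, baseline):.4f}")  # no drift
print(f"PSI vs shifted: {psi(baseline, shifted):.4f}")   # clear drift
# A common rule of thumb (assumed here) flags PSI > 0.2 as significant drift.
```

A deployment would compute the PSI between the data seen at model-validation time and a recent window of production inputs, raising an incident report when the threshold is crossed.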


Consensus & tensions – The panel agreed on the necessity of decentralized, capability-based governance; the importance of pre-deployment assessments combined with continuous AI-driven monitoring; the urgency of capacity-building and tech-sovereignty measures in the Global South; and the need for harmonised data standards and legal safe-harbours to avoid fragmentation. Divergence remained on the optimal locus of oversight (whether a fully decentralized network of local checks or a centrally linked AI-safety institute is preferable) and on the degree to which open-source tools should be subject to tiered access controls. These tensions point to hybrid models that blend bottom-up empowerment with top-down coordination, and to further work on funding mechanisms, operational designs for six-monthly monitoring, and concrete protocols for DIY and small-scale commercial bio-AI activities.


Action items – (i) launch the Global-South trustworthy-AI network and an AI-safety commons; (ii) adopt tiered, capability-level access and pre-deployment rubrics for high-risk biodesign tools; (iii) embed AI safety checks into grant-review and institutional-review processes; (iv) establish a six-monthly independent monitoring regime via a credentialed AI-safety institute linked to the BWC/WHO; (v) develop a tiered risk-classification scheme for biodesign tools; (vi) create federated data-standard frameworks (e.g., HL7-FHIR-adapted) and pre-negotiated legal safe-harbours for emergency data sharing; (vii) roll out an incident-reporting taxonomy covering the full spectrum of harms; and (viii) invest in capacity-building programmes for biosafety officers, AI-safety personnel and tech-sovereignty measures. Unresolved issues include the precise governance and funding structures for the proposed institute, operationalising tiered access without stifling legitimate open-source research, and scaling continuous model-drift detection in low-resource settings. Addressing these will be essential for a resilient, inclusive governance regime that safeguards both scientific progress and global bio-security.


Session transcript: Complete transcript of the session
Moderator

Key area: should we think about it as a data governance problem, a problem in model design, or should it be more of a verification or compliance angle?

Speaker 1

Thanks, thank you very much, Shyam, for having me, and good morning to everyone and welcome to this session. So let me maybe just start by saying that I’m not an AI or AI safety expert, so whatever I say, take it with a pinch of salt. My work is in biosecurity, and that’s the angle I’ll come from. I think all of those things, whether it’s model evaluation or other things, are there and are very, very important factors that we need to keep in mind. But on top of that there is also a very important, deep structural change that is happening. For example, in the field of life sciences, historically whatever risk governance we had was very much linked to the physical infrastructure: lab facilities, facility inspection, material transfer control, and things like that.

But that seems to have changed, and seems to be changing very rapidly now, with the kind of AI biodesign tools as well as LLMs that are emerging. RAND also did a study on this; there are probably more than 1,500 biodesign tools out there, and those are totally transforming how life sciences, and science in general, is done. Now, the kind of change that we are seeing is that with these capabilities it’s much easier to engineer proteins, optimize DNA sequences to do things that we want, and have better pathogen-host interaction modeling, and things like that. Now, these capabilities are, because of AI, becoming partly decoupled from the physical containment measures which were usually used in the life sciences.

So we have a lot of this risk landscape shifting a little bit more upstream to the design side, at least when it comes to the biological side of things. So yes, data governance matters. Model evaluation and red teaming are essential and we should be doing that. But it is also very important, especially for a country like India, where we have a very vibrant scientific ecosystem that is also very uneven, to see how we can bring this rapidly evolving AI-enabled science into the existing mechanisms to some extent, but at the same time develop those capabilities: have more people with core capabilities in chemical security, AI security, nuclear security, and things like that.

So we need to train more people on those things. So, again going back to the life sciences: integrating AI evaluation into biosafety systems, strengthening institutional readiness. Some labs and some institutions have information security offices; how can we get them better prepared for these new emerging risks that are coming due to AI? Some places have biosafety officers or biosecurity officers; how can we enable them better to address the AI risk? That is the direction we need to move towards. And have more adaptive oversight mechanisms, not limited to this once-in-a-while inspection that happens, but ones that keep pace with the rapidly evolving AI models coming up.

And I think, just in terms of the paradigm change that we are seeing and that you mentioned, there need to be more decentralized checks and balances and oversight mechanisms. If there is one authority sitting somewhere in Delhi and trying to do everything, that’s not going to work. So that is one of the things that we have to collectively think about: how do we decentralize these kinds of oversight systems to some extent? For example, as I was saying, how can we empower the information security or biosecurity offices and create what, in the field of disarmament where I have worked, is called the web of prevention. One measure is not enough. It’s not sufficient.

You need to have a number of measures in place which collectively can help prevent something bad from happening. Thank you.

Moderator

Thank you. That’s very insightful. And I think we’ve already touched on some areas that would be follow-up questions. P.T., focusing a bit more on open science in high-risk domains, especially in biological data and AI capabilities, as Suryesh was mentioning: how do we preserve the benefits of open science while preventing the destabilizing diffusion of capabilities that we were just discussing?

Speaker 2

Thank you. Thank you for having me today. So I guess I would love to be able to give a binary yes or no answer, right? I think we all want to have that. But unfortunately, that’s not quite the case. So we need to find a way to balance the openness and the restrictions as well. So I guess my answer here would be sort of a tiered access and contextual norms approach. I think those are really important. And I think RAND Europe has done a really great job at establishing the global risk index on AI-enabled biological tools, and also just generally looking into AI safety in general, where they do this thing they call the pre-deployment assessment

with structured rubrics. And I’m a huge fan of that, because I think that when you release very frontier models and frontier tools, the danger is already out there once released. It’s really hard to withdraw the danger. But however, prevention, right? There’s this window before you release where you can do a pre-deployment assessment. So I’m a really huge fan of that, in the same way that I’m a big fan of KYC, know your customer. And I guess this principle pretty much applies in the case of biosecurity, where we differentially allow the development of medical countermeasures and the defensive measures that are necessary for research, but also don’t limit the researchers from actually innovating either.

And I guess my point here is that the non-safeguarded access, like private access for credentialed researchers where necessary for defensive research, is absolutely necessary. And then, you know, open source tools are necessary. We can’t turn away from being open source. Any governance structure that conflates open source with danger makes a huge mistake, because that is also a very critical development point, especially for lower-resource settings. So we cannot afford to conflate that altogether. So, a very long way to answer this, but to summarize my answer: differentiated governance at the capability level is always better than blanket restriction at the access level.

Yeah.

Moderator

I think that’s a very structured answer, and there’s the start of a very valid framework-level conversation already happening there. Geeta, turning to you, thinking more about institutional gaps in enabling some of the potential solutions that we are discussing: what are the most immediate gaps that you see in evaluating systems, technical capability, regulatory and coordination, largely from the policy angle that you work in?

Speaker 3

Thank you, Shyam. Good morning, everyone. So on the technical capabilities, right, the most fundamental thing I see is the AI readiness aspect of deployment. In general, India ranks third globally, and when we look at the Southeast Asian countries, I think Indonesia is around 49, and so there we see the gap, right? So whatever we do from the Western context or in the Indian context can never cater to the AI readiness aspect of deployment there. So I think it’s important to cater to the unique needs of the Southeast Asian countries. Moreover, there is the end-user perception, where we see that we have to build a lot of capacity for creating awareness among the end users who are actually going to use the products. And from the policy perspective, I would like to give you certain aspects where we think about the socio-cultural aspects that are relevant to the deployment environments.

So in general, the large language models are usually trained on Western data, and there is very recent research work, maybe I will cover a bit of both tech and policy here. There is a Southeast Asia-related safety benchmark which says that all these leading large language models failed when evaluated on more than 20 to 30 percent of the risks in biological settings, which means that we did not have enough safeguards to protect people from encountering all these risks. Moreover, this lets us know that we have to build in more socio-cultural evaluations and assessments which will cater to the harms particular to that deployment environment, rather than just having high-level evaluation strategies.

And this cannot come just from the policy side, right? So we need to bring in a participatory approach involving the end users and the different stakeholders in using all these AI systems, right from the requirements definition. So when we assess whether we need an AI system or not, generally now there is a perception that, for whatever we are going to build or the problem that we are going to solve, by default we assume that we need a large language model, which is not even possible to deploy in a low-resource setting, right? So we need to think about small language models which will enable edge deployments in low-resource settings, and also consider all the multicultural and socio-economic diversity that exists in these regions, so that your model doesn’t hallucinate and is still fair. And we should also establish some governance and accountability frameworks which will make the developers more accountable, because having the developers more accountable will lead them to consider more safeguards, right?

And also create more awareness; the main fundamental thing is that they will be expected to document whatever testing has been gone through. And on the policy side, there is one more aspect which the Indian government also endorses, right? Self-regulation: voluntary commitments on managing and mitigating the risks that come out of all these AI models. So I think we have to have a unified framework which can still be adaptable to different deployment settings.

Moderator

I think we are already getting a diversity of perspectives here, and it is very useful to hear. Moving ahead and thinking about institutionalizing these kinds of capabilities in scientific AI contexts, P.T., turning to you: should independent evaluation and red teaming of AI systems that generate biological outputs, as a technical solution to this problem, especially thinking of biosecurity, become a norm and part of the global scientific infrastructure? And if so, how would we go about that?

Speaker 2

I think we have to have a clear understanding of the role of the AI system, and I think that is a key point. So I guess a good example to use here is probably nuclear weapons, right? Which fall under this organization called the International Atomic Energy Agency, the IAEA. Now, from my perspective, fissile materials, correct me if I’m wrong, are very scarce.

And they are, to a certain degree, technically trackable. And they are also, more than anything else, highly regulated. Whereas biology, on the other hand, is everything but that. It’s diffused, it’s dual-use by nature, and it’s also nearly impossible to trace. And, most importantly, commercially available, right? And so in a recent study, actually, this was done by this organization called SecureBio, where they tested frontier large language models against expert virologists. And it turns out that ChatGPT o3 actually outperformed expert virologists by 94% at troubleshooting wet lab protocols. So that’s a very shocking number, right? And then, I mean, obviously you mentioned earlier that there’s a very concentrated effort happening between the US, UK, and China, the global superpowers, basically.

And in the recommendation from RAND Europe that I was, you know, helping out with, we recommended that governments and also independent researchers do this six-monthly ritual of monitoring and assessment of risk on a continuous basis. And we also suggested, obviously, using AI as an automation tool to increase the efficiency of this risk monitoring system. But I think, to your point, stuff like that, a non-interactive methodology that doesn’t require researchers to actually query the dangerous systems directly, is already in and of itself a very meaningful safeguard. But that is not enough. We need something much larger than that.

That is the integration, institutionalizing it. And I would argue that a six-monthly refresh cadence, for it to be delivered, is going to require a very significant investment from governments at the multilateral level, right? So we can’t go without any investment at all. So my suggestion would be to actually implement this AI safety or security institute model that has been applied elsewhere, where largely it is technically credentialed and independent, but also has a very formal relationship with the government. And something that I would caveat from the bio side is that the institution should have some kind of anchoring around the Biological Weapons Convention or the WHO.

Because right now that relationship is not quite there yet. And, back to my point about pre-deployment assessment, I think that is definitely needed, and then the results have to be shared across the credentialed network with tiered confidentiality, rather than being kept proprietary to the different states. I think it’s kind of a…

Moderator

That’s an interesting position, P.T. Suryesh, thinking more about safety measures at large: how can we make sure that they remain rigorous and feasible within research ecosystems that you’re quite familiar with, from a biosecurity angle if you will, but also in the larger scientific ecosystem?

Speaker 1

Thanks, Shyam. I think the first thing that we need to understand is what that ecosystem is like, and then see if certain measures will work there or not, right? One of the hallmarks of, let’s say, the Indian scientific ecosystem is that there is a lot of heterogeneity. There are some places which are extremely well performing, and there are other places which are not well resourced or have all kinds of other challenges. So understanding what the ecosystem is like, what kind of regulation exists within the institutes, what kind of administrative measures are there, what kind of safety teams these institutes might have, all of those things are extremely important, right? The governance capacity, compliance culture and technical expertise vary widely in Indian institutions.

And I believe this is true for many other countries in the Global South as well, so it’s not something very unique. Particularly in India, we have challenges related to different kinds of resources, and even when the resources are there, sometimes it’s also problematic to use them efficiently enough. Now, given that context, if we just import safety frameworks that were developed in a well-resourced place in a Western country or any developed country, I don’t know if those would be a very good fit for the kind of system that we have here. Those might become more performative than functional to some extent. Another challenge, which P.T. also mentioned to some extent, is that the speed and scale of AI is huge, right?

And the traditional review mechanisms that institutes have for safety audits and all of those things are not going to work. We need something far more adaptive and quick. And what we had traditionally were these periodic, paper-based, facility-centric kinds of measures, and those are very much outdated in the era of AI that we live in. Now the question becomes: how do we design proportionate, capability-aware safeguards that would be better matched to the challenges that we have? One of the major challenges, as I think a lot of us realize, is that there is limited awareness about AI safety when it comes to scientific issues, even among the scientists.

So a large majority of scientists just don’t know that what they are putting into, let’s say, ChatGPT might be harmful, or that what they are getting out of biodesign tools could be harmful to some extent. There is some understanding about the privacy-related issues, but safety and security is still a big gap in the understanding of even the scientific experts. Now, also regarding AI, I think there needs to be a tiered risk classification. Not everything is highly risky. There are certain biodesign tools, for example, that are trained on virus data; those we will put in a higher risk category compared to something which is just working on, let’s say, certain animals which are not dangerous.

Now, also, as I was mentioning earlier, as the risk has moved a bit upstream, more onto the design side, we should also have more safety measures moving upstream. And as P.T. was mentioning, certain kinds of evaluation before launching AI tools are necessary, but also integrating AI evaluation modules into grant review processes, creating cross-trained AI biosafety review panels, so panels specifically for AI biosafety, built from the bottom up instead of from a top-down approach. Investing more in domestic evaluation capacity, having more AI safety institutes like Geeta’s home institute at IIT Madras. We need a lot more of that. And lastly, a lot of AI safety work is being done in the US and UK, right?

And as I was mentioning, importing that directly might not work. And we in the Global South are largely the users and importers of this technology. So we have to see, from the bottom up, where we put those safety measures. When it comes to import, when the data is being transferred, are there certain places where we can put those kinds of safeguards? Also, how can we use some tech sovereignty measures in this context, right? Tech sovereignty measures are used for a number of things, but AI safety and security is something where those could also be used to some extent. So, yeah, I would stop here and then we can discuss.

Thank you.

Moderator

Thank you. And I think there are a lot of useful thoughts here for us to explore a bit more. I think we’ve just crossed the midpoint, and I’m going to use Geeta to bridge between the next two topics by combining two of your questions, sorry for that. So, just as Suryesh mentioned: will the emerging scientific powers, you know, Global South middle powers, be able to shape governance in this context, especially enabling science, or will they continue to inherit frameworks? And if they were to show leadership, what would that look like in scientific AI and research ecosystems? You’ve already been working on some of this, so I’m looking forward to hearing concrete measures that are happening.

Speaker 3

Sure. So in general, what I think is that the emerging powers are definitely putting in all efforts to bring in the tools and frameworks that are required for governing these AI systems. For example, India’s strategy towards all these emerging technologies is to create sandboxes, which are highly essential for deploying or evaluating safety aspects of the models, right? They do it for healthcare systems, they do it for ideology systems, and whatever, right? So these types of tools and frameworks coming from Indian settings will actually help other less developed countries learn from the strategies that we use and then build something of their own; even something which cannot go cross-border can still happen through learning and collaboration, right?

So for example, we are going to launch a Global South network for trustworthy AI, which will enable all these mechanisms to happen, enable people to develop and deploy AI systems for low-resource settings. And the other initiative which is going to give a very big leap in evaluating AI safety is an AI safety commons for the Global South. That is part of the safe and trusted AI pillar, one of the pillars in this impact summit, and I think in another one or two years we will have a safety commons which will help us evaluate and assess how these AI data, models and systems work in different deployment settings.

Another important thing, as Suryesh mentioned, is the audit frameworks. When we focus on the kind of risk and audit mechanisms that we have here, we still have them from an organization perspective and not from the end-user perspective. So at CeRAI, we have come up with an incident reporting mechanism and a framework that caters to Indian settings. It tells you how to operationalize AI incident reporting in Indian settings, which is completely different from Western settings. And here we have to capture the harms that people experience in marginalized communities, which would otherwise never be recorded anywhere, right? So how do we enable all these things?

So since it is all about all these AI systems, right, even those things will have certain impacts on the marginalized communities, which may be indirect impacts. But how do they know that such things are happening to them, right? So those kinds of gaps we should mitigate by building more awareness and creating more AI literacy. And we should also be able to provide more privacy to all these people. My final thought, combining all these things, is that we have to bring in some kind of collaborative work between the different stakeholders involved in developing and deploying these systems. And the governments have already given certain guidance about how to enable all these things through the techno-legal framework and the AI governance guidelines which were recently published by MeitY. So the Southeast Asian countries can learn from developing countries like India and then curate a more tailored approach towards their unique needs. That is what I think. So whoever has an opportunity or a willingness to leverage these technologies can learn from the mistakes as well as the experience that other countries have, which is now openly available through all these summits.

Moderator

That’s very useful, and I’m looking forward to following up on IIT Madras’s work on this front as well. Going to Suresh for the last question in this series: where should the focus of safety measures and evaluations be? Primarily at the model level, and you talked about upstream quite a bit, or on broader socio-technical readiness measures and misuse considerations? Where do you think it should be?

Speaker 1

And also, very importantly, we have to see this in the context of people doing their own thing, the DIY kind of science that happens, and of small-scale commercial activities that are not fully under the government’s oversight mechanisms, right? Considering all of these points, policy evaluation must expand from model-centric assessment to socio-technical assessment. That would include evaluating things like capability uplift relative to the government capacity that exists: government has a certain capacity to manage or exercise oversight, but how are these AI tools changing that? Incentive structures, which shape model deployment, are very, very important. So is the diffusion of risk across borders.

None of these things respect national borders, right? So how is risk going to spread, with people using VPNs and a number of other tools? Lastly, there is integration with existing biosafety and research-security systems, as I already mentioned. So, briefly: performance evaluation is necessary, but governance-relevant evaluation must be systemic. Otherwise we risk auditing algorithms while ignoring the institutions that operationalize them, and focusing on those institutional-level mechanisms is very, very important. Thank you.

Moderator

Piti, the last structured question before we move into a more open conversation. AI is becoming embedded not just in new capacities but also in existing programs like biosurveillance and public health systems, so there is a mix of emerging scientific knowledge with more legacy, let’s call it engineering, knowledge as well. How do we make sure that safety, evaluation, and interoperability all exist across this divide without fragmentation happening across the ecosystem? Because you can easily imagine everyone doing their own AI safety evaluation and not necessarily talking to each other.

Speaker 2

Thank you, Shyam. I think this is a very important question, and it’s also a topic that I’m really passionate about: biosurveillance. To your point, countries are already deploying AI-enabled biosurveillance systems, whether syndromic surveillance, genomic sequencing pipelines, or outbreak modeling. But they are not building on unified data standards; they are building on incompatible data standards under very different legal regimes across borders. We’ve seen that in Southeast Asia, even between countries like Singapore and Malaysia, which have different legal regimes for how they monitor data and run biosurveillance.

And the fragmentation risk, I would argue, is not just a technical risk, because we’ve seen COVID. I think we were all a little bit traumatized by COVID. We’ve seen how data hoarding and incompatible reporting actually cost lives, and I saw that happening especially in the lower-resource settings across the region, in countries like Cambodia. AI systems trained on non-representative data obviously perform much worse. And guess what happens? When they perform worse, the region most affected is the region that needs the help the most, and that same region also has the least data infrastructure.

So, to answer your question, I think there are three things to be addressed here. The first is obviously data-standards harmonization, which we currently don’t have. I think we would need not a global standard enforced on every country, but a federated interoperability framework that applies across different countries. I can think of HL7 FHIR, the Fast Healthcare Interoperability Resources standard, which attempts to address these very issues for clinical data; here it would be adapted for public health surveillance. The second point is legal safe harbors for cross-border sharing of data in public health emergencies, negotiated beforehand. And this is important: beforehand, because if you negotiate during an outbreak, people are going to be freaking out.

People are going to say, I’m not sharing my data with you; what are you going to do with that data? So this needs to be done beforehand. And the last point, also the most politically challenging one, is to have some kind of shared evaluation criteria across different countries that are embedded into national surveillance systems. For example, Singapore’s data infrastructure environment might not apply to countries with different climate data or different demographic data, so the criteria need to be adapted within each national surveillance system. And I guess my last message is that AI governance frameworks often think of biosurveillance as a niche edge case.

And people working on biosecurity frameworks think of AI governance as just a tool. These people don’t talk to each other, and that gap right there is where the risk happens. So, yeah, we just need to talk to each other more. That’s easier said than done. Yeah.
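The FHIR-style adaptation for surveillance that P.T. sketches could, in the simplest case, be a shared resource shape agreed beforehand. Below is a minimal, purely illustrative Python sketch of such an Observation-like record; the function name, the site identifier, and the code value are assumptions, not an official surveillance profile:

```python
import json
from datetime import date

def surveillance_observation(site_id, pathogen_code, value, unit):
    """Build a minimal FHIR-style Observation dict. Sites that agree on this
    shape beforehand can pool readings across borders without translation."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": pathogen_code}]},
        "subject": {"reference": f"Location/{site_id}"},
        "effectiveDateTime": date.today().isoformat(),
        "valueQuantity": {"value": value, "unit": unit},
    }

# A hypothetical wastewater reading, encoded once and readable by any partner
# using the same agreed shape.
reading = surveillance_observation("sg-ww-01", "94500-6", 1200.0, "copies/mL")
payload = json.dumps(reading)
```

The point of fixing the shape in advance is exactly the "negotiated beforehand" argument: the schema is not something partners debate mid-outbreak.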

Moderator

So I’m just about to close, with maybe five minutes or just under for audience questions. Thank you. Just ten-second final thoughts from each of you on the panel. Suresh.

Speaker 1

Just very quickly: we also need to keep in mind how AI could help solve some of these AI safety challenges, for example how agentic AI could be used when people are trying to develop vaccines. CEPI has developed a platform where agentic AI is used to check whether someone is trying to jailbreak or misuse the tool. A second very quick point: with all that I said, there is still a gap in transferring things from digital to physical, what is called the digital-to-physical barrier. So even if you have everything digitally, you still can’t just develop or modify viruses without proper physical infrastructure, and there are still some ways to control that.

Thank you.

Speaker 3

I think we should move toward transforming issues into intelligence: learning from the risks that occur and feeding that back into model training and other assessment activities to mitigate risk in real time. That is where we need to move: bringing more people into evaluations and making these systems safer for people to use.

Speaker 2

I’ll make it quick. The point I want to make here is that I should echo Suresh’s point: I think you’re right that we should not shoot ourselves in the foot, especially as developing countries. So my last message is that while we are forging ahead in innovation, in whatever scientific domains we are working in, we need to be conscious of the impact that we have. And I think the AI Impact Summit is one of the really good places to jumpstart those kinds of conversations and break the silos. Thank you.

Moderator

Thank you, everyone. I’m just going to take probably one minute to summarize the key points. Evaluation, I see, is largely a systemic question, and safety measures are a systemic question too. I especially liked the point about incident response not already being in place, and the couple of points on cross-border solutions and problems, which we already have. On open science, we talked about managed access, safeguards, and weighing government capacity to manage access against letting tools out for more DIY-oriented science, which is a good term, I really like that. That’s a key area. And for emerging scientific powers, of course, collaboration is key, along with a tailored approach, which is something I’m again waiting to see from IIT Madras as well, their contribution on this.

And the cross-border work on legal safe harbors and data-standards harmonization, P.T., that you mentioned really landed well from this panel. I’m going to stop my summary right now; more of this will be put together in a blog at some point in the near future. Perhaps we can go to questions. First, yes, please. I think I can give you mine.

Audience Member 1

Thank you so much for your wonderful insights; I really enjoyed this session. I am a researcher in AI safety at the University of York, focusing on the psychological harms of AI. What I want to ask, particularly Geeta, is this: when it comes to the definition of harms, traditional safety engineering caters more to physical harms, and now we see the whole spectrum of harms expanding beyond that. So I would love to know about the work being done by CeRAI and by you in this area, and in fact to enrich my research with it.

Speaker 3

Yeah, sure. When we assess harms and impacts, we have to do it from two different perspectives. One is the functional side, where we assess algorithmic risks and related issues; the other is the human-centric perspective where, like you said, we work from the psychological, ethical, and related angles. Here at CeRAI we work on assessing bias, determining whether a model is stereotypical or not, and generating explanations for high-level scientific models. On the psychological side, there are the cognitive capabilities of AI models, which can either enhance or degrade the capabilities of humans.

So we are trying to do some assessments of those from the incident perspective. If you read the incident-reporting framework we have, it includes a taxonomy of risks, harms, and impacts. Among the kinds of harms we have defined, we have categorized physical, psychological, and cyber incident-based harm sets, and beyond those, generic kinds of harms such as algorithmic, socio-economic, and environmental harms. We are trying to come up with a taxonomy whose hierarchies apply to these kinds of harms and impacts, and which will again be model-specific, use-case-specific, and domain-specific.
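A hierarchy of the kind Geeta describes can be represented as a simple category-to-subtype mapping. This is an illustrative sketch only; the nesting and the subtype names are assumptions, not CeRAI's actual taxonomy:

```python
# Illustrative harm taxonomy echoing the top-level categories named in the discussion.
HARM_TAXONOMY = {
    "physical": ["injury", "unsafe_recommendation"],
    "psychological": ["distress", "overreliance", "capability_degradation"],
    "cyber": ["data_breach", "model_exfiltration"],
    "algorithmic": ["bias", "stereotyping", "opaque_decision"],
    "socio-economic": ["denied_service", "labour_displacement"],
    "environmental": ["compute_emissions"],
}

def classify(harm_subtype):
    """Map a reported harm subtype back to its top-level category."""
    for category, subtypes in HARM_TAXONOMY.items():
        if harm_subtype in subtypes:
            return category
    return "unclassified"
```

In practice each use case or domain would extend the subtype lists, which is what makes the taxonomy model-, use-case-, and domain-specific.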

That is where we are trying to work. We also have a healthcare toolkit that enables people to assess perceptions: how they treat these models and whether they see these AI applications as helpful for them or not, and then to develop capacity-building programs for the different roles they work in. This has been done with CMC Vellore Hospital, where we have been assessing the perceptions of healthcare workers and then developing a training module that enables them to use AI models and tools more confidently, rather than being resistant to them or reluctant to rely on them.

Moderator

Probably the last quick question. Maybe keep the responses short as well, please. Sorry.

Audience Member 2

Hi. My question is about the geographical barriers we have been discussing: when we change the geography, the models tend to perform poorly. Are we concerned about the temporal dimension as well? As we go forward in time, the data is going to change eventually, and that is going to affect modeling. How do we plan on mitigating such a problem if it arises?

Speaker 3

Yeah. This comes under model monitoring, the system-monitoring approach, where we consider data drift and out-of-distribution detection. We look at the distributional aspects of the data and the models, so this is definitely one of the criteria by which you assess safety and evaluate impacts.
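One common way to quantify the temporal drift the questioner raises is the Population Stability Index (PSI), which compares the live data distribution against the training-era distribution. The sketch below is a generic implementation of that metric, offered as an example, not as the specific monitoring method CeRAI uses:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (e.g. training-era
    data) and a live sample. Values above roughly 0.25 are commonly read as
    significant drift warranting a model review."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(xs), 1e-4) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would recompute `psi(training_sample, recent_sample)` on a schedule and raise an alert once the score crosses an agreed threshold.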

Moderator

Yes, I think the last question.

Audience Member 3

Thank you so much for the insightful discussion; I really appreciated the expertise you’re bringing to the topic. And thanks, P.T., for bringing up COVID, because my question is about that. As we learned from COVID, biosecurity risks can quickly become cross-border, existential threats. So what would a successful web of prevention and incident-response framework look like, and who are you looking up to in this space? Who is doing it well?

Speaker 1

I can start; maybe P.T. can add. As I was mentioning, it will have to be more decentralized but at the same time integrated with the leadership. There needs to be more empowerment of people like biosafety officers in labs, institutional biosafety committee members, and people working on the ethics and research-security side at institutes. Those are the people who need to be empowered, so there needs to be more capacity building for them, and at the same time a mechanism through which they can report incidents to the very top, where leadership is sitting in the capitals.

That way leadership can in some way get an overview and monitor the situation as it unfolds at the level of the different institutes.

Speaker 2

Thanks. I can add a little to that. In Singapore we actually have different agencies responsible for this: the National Environment Agency; the MOH, obviously, the Ministry of Health; and smaller agencies like the Communicable Diseases Agency and PREPARE, each responsible for different tasks. But I want you to envision this as the way Singapore is trying to establish itself: almost as a firefighter. When there’s an incident or a crisis, who does what is very clear, but that’s not always clear across different countries; in Laos or Vietnam, for example, it might look very different. Having a very coordinated response across the different agencies on who is doing what is what matters.

For example, the National Environment Agency is responsible for wastewater surveillance, monitoring whether sickness is spiking or not; those are the people you would look up to. And I think that’s the last word: much as with anything else in the bio context, it all comes down to prevention and preparedness.

Moderator

Thank you, everyone, for the questions, and thank you to my brilliant panelists, Suresh, Geeta, and P.T. This was a very insightful discussion. On the screen is the work from RAND Europe with CLTR, covering some of what was referred to by P.T. and other panelists, including aspects of what we were discussing about risk typification. You’ll probably get some ideas there as well. And with that, I close. I’m surprised: apparently I’m supposed to hand over these mementos, including one to me, so let us do that now. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (11)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator began by asking whether the challenges of AI‑enabled biodesign should be framed primarily as a data‑governance problem, a model‑design issue, or a verification‑and‑compliance matter.”

The knowledge base records that the moderator provided the opening framing for the discussion (Keynote-Rishad Premji) [S60].

Confirmed (high)

“Risk governance has traditionally relied on physical controls such as lab inspections and material‑transfer agreements, but AI‑enabled biodesign tools are moving risk upstream to the design phase.”

The knowledge base notes that AI has fundamentally altered where risks originate in biological research, shifting attention from traditional physical controls to upstream, design-phase considerations [S21].

Confirmed (high)

“Data governance, model evaluation and red‑team exercises remain essential for managing AI‑enabled biodesign risks.”

Red-teaming is highlighted as a critical, human-intensive process for identifying system gaps and scaling evaluation methods, underscoring its essential role [S56].

Additional Context (medium)

“Model cards, evaluation benchmarks and feedback loops are used to flag potential risks and improve AI models.”

The knowledge base describes the practice of publishing model cards and evaluation benchmarks to provide transparency and create feedback loops that can surface risks [S35].

Additional Context (medium)

“AI technology could facilitate the development of chemical or biological weapons, creating new security challenges.”

UK Prime Minister Rishi Sunak’s remarks emphasize that AI may enable the creation of chemical or biological weapons, supporting the claim that AI-enabled biodesign introduces novel security concerns [S66].

External Sources (71)
S1
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S2
Global Perspectives on Openness and Trust in AI — Speakers:Alondra Nelson, Audience member 3 Speakers:Anne Bouverot, Alondra Nelson, Audience member 3
S3
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S4
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S5
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S6
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S7
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S10
S12
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S13
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S14
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S15
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S16
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S17
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S18
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S19
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S20
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S21
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2 – Speaker 1- Speaker 3 Both speakers advocate for decentralized approaches where local institut…
S22
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S23
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S24
https://app.faicon.ai/ai-impact-summit-2026/advancing-scientific-ai-with-safety-ethics-and-responsibility — All of these things don’t respect national borders, right? So, how it’s going to spread. If people using VPN or other th…
S25
Masterclass#1 — The speaker emphasised that political determination is crucial for successful capacity reinforcement but argued that it …
S26
Agenda item 5: Day 2 Afternoon session — Not all countries have the same technological and technical capacities
S27
Oversight of AI: Hearing of the US Senate Judiciary Subcommitee — We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety i…
S28
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The aviation approach demonstrates sophisticated risk management through embedded safety procedures and continuous monit…
S29
Networking Session #60 Risk &amp; impact assessment of AI on human rights &amp; democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S30
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S31
Towards a Safer South Launching the Global South AI Safety Research Network — Arguments:Global South countries lack capacity building, access to compute, and face linguistic/cultural mismatches in b…
S32
WS #82 A Global South perspective on AI governance — AUDIENCE: Thank you for the wonderful thought provoking conversation. I wanted to ask, I only attended half of the ses…
S33
WS #103 Aligning strategies, protecting critical infrastructure — International cooperation and multistakeholder collaboration Need for capacity building, especially in the Global South
S34
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa argues that policies should require AI threat modeling and red teaming as regulatory requirements for AI systems, …
S35
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S36
HIGH LEVEL LEADERS SESSION I — Capacity building for policy oversight and management of partnerships is considered crucial. Government institutions nee…
S37
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Promoting policies that enable responsible and interoperable cross-border data transfers, access, and sharing is of para…
S38
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — In conclusion, the Digital Cooperation Organization is dedicated to promoting the digital economy and leveraging technol…
S39
Policymaker’s Guide to International AI Safety Coordination — This comment crystallizes the fundamental tension at the heart of AI governance – the misalignment between market incent…
S40
GOVERNING AI FOR HUMANITY — – 120 Supported by the proposed AI office, the standards exchange would also benefit from strong ties to the internation…
S41
Artificial intelligence — AI applications in the physical world (e.g. in transportation) bring into focus issues related to human safety, and the …
S42
Advancing Scientific AI with Safety Ethics and Responsibility — And also create more awareness about the main fundamental thing is that they will be expected to document whatever testi…
S43
Advancing Scientific AI with Safety Ethics and Responsibility — Differentiated governance at capability level rather than blanket restrictions, allowing beneficial applications while c…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — It may be appreciated that an India first approach is not inward looking. It is context aware. It ensures that governanc…
S45
ISO and IEC issue new standard for AI risk management — The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have pu…
S46
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The aviation approach demonstrates sophisticated risk management through embedded safety procedures and continuous monit…
S47
Setting the Rules_ Global AI Standards for Growth and Governance — And I think… similar with some of the controls that might need to be kind of used to manage some of the risks if there…
S48
Cross-Border Data Flows: Harmonizing trust through interoperability mechanisms (DCO) — An additional benefit of promoting data sharing and adoption of successful health improvement models is the potential fo…
S49
Advancing Scientific AI with Safety Ethics and Responsibility — -Shifting Risk Landscape in Life Sciences: The discussion highlighted how AI biodesign tools and LLMs are fundamentally …
S50
Advancing Scientific AI with Safety Ethics and Responsibility — Shifting Risk Landscape in Life Sciences: The discussion highlighted how AI biodesign tools and LLMs are fundamentally c…
S51
AI governance needs urgent international coordination — AGIS Reports analysisemphasises that as AI systems become pervasive, they create significant global challenges, includin…
S52
AI Meets Cybersecurity Trust Governance &amp; Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S53
Towards a Safer South Launching the Global South AI Safety Research Network — Raised by:Mr. Abhishek Singh This highlights the need for practical evaluation tools and capacity building mechanisms t…
S54
Towards a Safer South Launching the Global South AI Safety Research Network — -Need for multilingual and multicultural evaluation systems: The discussion emphasized developing benchmarks beyond Engl…
S55
Keynote-Alexandr Wang — “We publish model cards and evaluation benchmarks and data so you can see how they work, their intended use, and how we …
S56
Driving Social Good with AI_ Evaluation and Open Source at Scale — Evidence:Described the multi-step pipeline of red teaming requiring human involvement at gap identification, prompt crea…
S57
How can sandboxes spur responsible data-sharing across borders? (Datasphere Initiative) — Promoting policies that enable responsible and interoperable cross-border data transfers, access, and sharing is of para…
S58
HIGH LEVEL LEADERS SESSION I — Capacity building for policy oversight and management of partnerships is considered crucial. Government institutions nee…
S59
Open Forum #7 Advancing Data Governance Together Across Regions — – **Meri Sheroyan** emphasized continuing to pilot small-scale cross-border data-sharing initiatives in specific sectors…
S60
Keynote-Rishad Premji — Opening framing by the moderator
S61
Open Forum #26 High-level review of AI governance from Inter-governmental P — Andy Beaudoin: Good afternoon, everyone. So, you did ask us a very important question, and you did ask us privately to…
S62
Open Forum #37 Digital and AI Regulation in La Francophonie an Inspiration and Global Good Practice — Moderator: … Ok. Dear friends, ladies and gentlemen, welcome. Theoretically, now it works, right? The headphones, you …
S63
How to make AI governance fit for purpose? — This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to con…
S64
UNITED NATIONS CONFERENCE ON TRADE AND DEVELOPMENT — Alternative forms of data governance are emerging to enable the sharing of data for public interest purposes. In the cur…
S65
Artificial intelligence (AI) and cyber diplomacy — The speaker argued for balanced attention across short-term, mid-term, and long-term AI risks, cautioning against fixati…
S66
UK PM Sunak urges for government action on AI risks — British Prime Minister Rishi Sunak has emphasised the need for governments to addressthe risks associated with AI. Sunak…
S67
Indias AI Leap Policy to Practice with AIP2 — Brando Benefi, co-reporter of the EU AI Act, argued that voluntary ethical frameworks alone are insufficient. “If you su…
S68
What is it about AI that we need to regulate? — A critical insight emerged that the problem has evolved from a coverage gap to a usage gap. William Lee from ITU noted i…
S69
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Rosemary Kayess:Hello, thank you for the invitation to speak today. Article 27 of the Universal Declaration of Human Rig…
S70
AI as critical infrastructure for continuity in public services — Excellent question. Thank you so much for that. Good afternoon, everybody. Thank you for all the comments. So we’ve been…
S71
Scaling AI for Billions_ Building Digital Public Infrastructure — And the other is the adversarial part of the AI is that. though you use AI for cyber security but the issue is that ther…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
5 arguments · 159 words per minute · 1969 words · 742 seconds
Argument 1
Decentralized checks and empowerment of biosafety officers
EXPLANATION
Speaker 1 argues that oversight of AI‑enabled biosecurity should not be centralized in a single authority in Delhi, but distributed across many institutional actors. Empowering existing information‑security and biosafety offices will create a network of checks and balances that can respond more quickly to emerging risks.
EVIDENCE
He notes that a single authority cannot handle the workload and calls for decentralised oversight mechanisms, citing the need to empower information-security or biosecurity offices and create a “web of prevention” with multiple measures rather than a single one [24-31]. Later he stresses the importance of empowering biosafety officers, building their capacity, and establishing a reporting channel to top leadership for incident awareness [295-299].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources emphasize the need for decentralized oversight and empowerment of local biosafety officers, noting that centralized mechanisms are insufficient [S21] and that local empowerment is advocated [S22].
MAJOR DISCUSSION POINT
Decentralised oversight
AGREED WITH
Speaker 2
DISAGREED WITH
Speaker 2
Argument 2
Preserve open‑source innovation while protecting low‑resource settings
EXPLANATION
Speaker 1 stresses that open‑source tools are essential for innovation, especially in low‑resource environments, and should not be banned. At the same time, capacity‑building and training are needed so that these settings can use the tools safely.
EVIDENCE
He points out India’s vibrant but uneven scientific ecosystem and the need to develop core capabilities, train more people, and create mechanisms that protect low-resource settings while leveraging AI-enabled science [15-16][75]. He also calls for expanding evaluation from model-centric to socio-technical assessments that consider diverse contexts [190-191].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both sources argue that open-source tools are essential for innovation in low-resource settings and should not be conflated with danger, recommending risk management rather than bans [S22] and supporting decentralized approaches [S21].
MAJOR DISCUSSION POINT
Open‑source innovation in low‑resource settings
DISAGREED WITH
Speaker 2
Argument 3
Pre‑deployment assessments and integration of AI evaluation into grant reviews
EXPLANATION
Speaker 1 proposes that safety checks should be embedded early in the research pipeline, including mandatory pre‑deployment assessments and linking AI safety evaluation to grant‑review processes. This would ensure that risks are identified before tools are released.
EVIDENCE
He mentions integrating AI evaluation modules into grant review processes and creating cross-trained AI biosafety review panels as a bottom-up approach rather than top-down [147-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Consensus on embedding safety checks before deployment is highlighted in external evidence, with calls for pre-deployment assessments and integration into funding reviews [S21] and structured rubrics [S22].
MAJOR DISCUSSION POINT
Embedding safety in funding decisions
AGREED WITH
Speaker 2
DISAGREED WITH
Speaker 2
Argument 4
Move safeguards upstream; tiered risk classification for biodesign tools
EXPLANATION
Speaker 1 argues that because AI is now decoupled from physical containment, risk mitigation must shift upstream to the design phase. He suggests a tiered risk classification that distinguishes high‑risk biodesign tools (e.g., virus‑focused) from lower‑risk ones.
EVIDENCE
He describes the shift of risk upstream and proposes a tiered risk classification, noting that tools trained on virus data should be placed in a higher-risk category compared to those working on harmless organisms [142-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External commentary notes the shift of biosecurity risk upstream due to AI-enabled tools and proposes tiered risk classification for biodesign applications [S21].
MAJOR DISCUSSION POINT
Upstream risk classification
AGREED WITH
Speaker 2, Moderator
DISAGREED WITH
Speaker 3
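The tiered scheme Speaker 1 describes could be sketched as a simple rule set. The tier labels, tool attributes, and decision rules below are illustrative assumptions for this report, not a scheme proposed in the session; only the rule that virus-trained tools belong in the highest tier comes from his remarks [142-146].

```python
from dataclasses import dataclass

@dataclass
class BiodesignTool:
    """Illustrative attributes of an AI-enabled biodesign tool."""
    name: str
    trained_on_viral_data: bool      # the high-risk signal Speaker 1 cites
    generates_novel_sequences: bool  # assumed secondary risk signal
    open_weights: bool               # assumed secondary risk signal

def classify(tool: BiodesignTool) -> str:
    """Assign a risk tier; virus-trained tools always land in the top tier."""
    if tool.trained_on_viral_data:
        return "high"
    if tool.generates_novel_sequences and tool.open_weights:
        return "moderate"
    return "low"

# A tool working only on harmless organisms stays in the lowest tier,
# mirroring the contrast drawn in the session.
```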
Argument 5
Heterogeneous governance capacity; training and tech‑sovereignty measures required
EXPLANATION
Speaker 1 highlights the wide variation in governance capacity across Indian institutions and the Global South, emphasizing the need for targeted training and tech‑sovereignty strategies. He warns that importing Western frameworks wholesale may be ineffective.
EVIDENCE
He describes the heterogeneity of Indian scientific institutions, limited resources, and the challenge of efficiently using those resources, arguing that imported safety frameworks could become performative rather than functional [124-131]. He later mentions tech-sovereignty measures as a way to bolster AI security [155-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources discuss the varied governance capacity across institutions and the need for targeted training and tech-sovereignty strategies to avoid perfunctory adoption of Western frameworks [S21] and [S22].
MAJOR DISCUSSION POINT
Capacity gaps and sovereignty
AGREED WITH
Speaker 3
Speaker 2
5 arguments · 152 words per minute · 1873 words · 737 seconds
Argument 1
Tiered access and contextual norms for high‑risk tools
EXPLANATION
Speaker 2 suggests a tiered‑access model where high‑risk AI‑enabled biological tools are subject to stricter controls, while lower‑risk tools remain openly available. Context‑specific norms and pre‑deployment assessments are key to balancing openness and safety.
EVIDENCE
He proposes a tiered access and contextual norms approach, referencing RAND Europe’s global risk index and pre-deployment assessments with structured rubrics, and likens the model to KYC principles for biosecurity [41-45][49-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External material proposes tiered-access models with contextual norms and KYC-like procedures for high-risk tools [S22] and references similar recommendations in [S21].
MAJOR DISCUSSION POINT
Tiered access model
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1
Argument 2
Differentiated, capability‑level governance rather than blanket restrictions
EXPLANATION
Speaker 2 argues that governance should be differentiated at the capability level, targeting specific high‑risk functionalities instead of imposing blanket restrictions on all AI tools. This nuanced approach preserves innovation while mitigating danger.
EVIDENCE
He summarizes his view that “differentiated governance at capability level is always better than blanket restriction at access level” [57-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A direct quote supporting differentiated capability-level governance over blanket restrictions appears in the sources [S22] and is echoed in [S21].
MAJOR DISCUSSION POINT
Capability‑level governance
AGREED WITH
Speaker 1, Moderator
Argument 3
Six‑monthly independent monitoring and AI safety institute model
EXPLANATION
Speaker 2 recommends a regular six‑monthly independent monitoring regime, supported by a dedicated AI safety institute that operates independently yet maintains formal ties with governments. This structure would provide continuous risk assessment and require substantial multilateral investment.
EVIDENCE
He cites a RAND Europe recommendation for six-monthly monitoring and the use of AI to automate risk monitoring [105-108], notes the need for significant government investment [110-112], and outlines an AI safety institute model with credentialing and formal government relationships [113-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regular six-monthly monitoring and the establishment of an AI safety institute are recommended in the external literature [S22].
MAJOR DISCUSSION POINT
Regular independent monitoring
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1
Argument 4
Use AI as an automation tool for continuous risk monitoring
EXPLANATION
Speaker 2 emphasizes that AI itself can be leveraged to automate and scale continuous risk monitoring, making the surveillance of AI‑enabled bio‑tools more efficient and timely.
EVIDENCE
He mentions that AI can be used as an automation tool to increase the efficiency of risk monitoring systems [106-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of AI to automate continuous risk monitoring is advocated in the sources [S22].
MAJOR DISCUSSION POINT
AI‑driven risk monitoring
AGREED WITH
Speaker 1
Argument 5
Harmonise data standards (e.g., federated HL7‑FHIR) and create legal safe‑harbors for emergency data sharing
EXPLANATION
Speaker 2 calls for harmonised public‑health data standards, such as a federated HL7‑FHIR approach, and pre‑negotiated legal safe‑harbors that allow cross‑border data sharing during emergencies. These steps would reduce fragmentation and improve outbreak response.
EVIDENCE
He outlines the need for data-standards harmonisation, proposing a federated HL7-FHIR-like framework for public-health surveillance [226-230], and stresses the importance of pre-negotiated legal safe-harbors for emergency data sharing [231-234].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External evidence calls for federated HL7-FHIR data standards and pre-negotiated legal safe-harbors for emergency data sharing [S22] and mentions similar ideas in [S21].
MAJOR DISCUSSION POINT
Data standards and legal safe‑harbors
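To make the harmonisation point concrete, a record in a federated, FHIR-R4-style surveillance exchange could look like the sketch below. The helper name, the LOINC code, and all field values are illustrative assumptions for this report, not details given by Speaker 2.

```python
import json

def make_observation(patient_ref: str, loinc_code: str, result: str) -> dict:
    """Build a minimal FHIR-R4-style Observation resource (illustrative subset)."""
    return {
        "resourceType": "Observation",
        "status": "final",
        # LOINC is the coding system FHIR deployments commonly use for lab tests
        "code": {"coding": [{"system": "http://loinc.org", "code": loinc_code}]},
        "subject": {"reference": patient_ref},
        "valueCodeableConcept": {"text": result},
    }

# Hypothetical pathogen-test record, serialised as it might travel between nodes
obs = make_observation("Patient/anon-001", "94500-6", "Detected")
print(json.dumps(obs, indent=2))
```

Because every participating system emits the same resource shape, a federated query can aggregate case counts across borders without pooling raw records centrally — which is where the pre-negotiated legal safe-harbours would apply.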
Speaker 3
4 arguments · 147 words per minute · 1665 words · 675 seconds
Argument 1
AI readiness gaps and need for socio‑cultural benchmarks
EXPLANATION
Speaker 3 points out that many Global South countries lag in AI readiness, and that safety benchmarks must incorporate socio‑cultural factors specific to each deployment environment. Without such benchmarks, large language models may fail to address region‑specific risks.
EVIDENCE
He cites India’s global AI ranking, Southeast Asian gaps, and a safety benchmark showing that leading LLMs fail 20-30 % of risk assessments in biological settings, underscoring the need for socio-cultural evaluations [62-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources highlight AI readiness gaps in the Global South and the need for socio-cultural benchmarks in safety assessments [S21] and note capacity differences across regions [S26].
MAJOR DISCUSSION POINT
Readiness and cultural benchmarks
DISAGREED WITH
Speaker 1
Argument 2
Incident‑reporting framework and audit mechanisms tailored to local contexts
EXPLANATION
Speaker 3 describes the development of an incident‑reporting framework designed for Indian settings, which captures harms experienced by marginalized communities and aligns with local regulatory realities.
EVIDENCE
He explains that CRI has created an incident-reporting mechanism and framework specific to Indian contexts, aiming to capture harms in marginalized communities that might otherwise be missed [169-172].
MAJOR DISCUSSION POINT
Local incident‑reporting
Argument 3
Cross‑trained AI‑biosafety review panels and domestic evaluation capacity
EXPLANATION
Speaker 3 advocates for creating cross‑trained review panels that combine AI expertise with biosafety knowledge, and for building domestic evaluation capacity to reduce reliance on external frameworks.
EVIDENCE
He notes the need for participatory approaches, cross-trained AI-biosafety panels, and accountability mechanisms, and calls for investment in domestic evaluation capacity such as the AI safety institute at IIT Madras [73-78][148-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External references suggest creating cross-trained review panels and building domestic evaluation capacity, including AI safety institutes such as at IIT Madras [S22] and [S21].
MAJOR DISCUSSION POINT
Cross‑trained panels and domestic capacity
AGREED WITH
Speaker 1
Argument 4
Global‑South network for trustworthy AI and safety commons to share best practices
EXPLANATION
Speaker 3 announces the launch of a Global South network and a safety commons that will enable low‑resource countries to share tools, benchmarks, and evaluation practices for trustworthy AI.
EVIDENCE
He describes the upcoming global-south network for trustworthy AI and an AI safety commons expected within one to two years, which will support evaluation and assessment across deployment settings [164-166].
MAJOR DISCUSSION POINT
South‑South collaboration platform
AGREED WITH
Moderator, Speaker 1
Audience Member 3
2 arguments · 190 words per minute · 78 words · 24 seconds
Argument 1
Need for coordinated, multi‑agency incident response framework
EXPLANATION
The audience member asks what a successful, coordinated incident‑response and prevention framework would look like and which organisations are already implementing such systems effectively.
EVIDENCE
The question highlights the desire for a multi-agency web of prevention and incident response, referencing lessons from COVID-19 and seeking examples of effective frameworks [290-293].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External commentary notes the lack of dedicated incident-response mechanisms and the need for coordinated multi-agency frameworks [S21] and draws parallels with aviation risk-management approaches [S28].
MAJOR DISCUSSION POINT
Coordinated incident response
AGREED WITH
Speaker 1, Speaker 3, Moderator
Argument 2
Coordination among national agencies to avoid fragmented surveillance
EXPLANATION
The audience member stresses that fragmented national surveillance systems hinder effective biosurveillance, and calls for better coordination among agencies across borders.
EVIDENCE
The question raises concerns about fragmented surveillance and the need for coordinated national agency action, echoing earlier discussion about multi-agency response [281-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources discuss fragmented surveillance and the importance of coordinated national agency action to improve biosurveillance [S21].
MAJOR DISCUSSION POINT
Agency coordination
Audience Member 1
1 argument · 167 words per minute · 100 words · 35 seconds
Argument 1
Develop a taxonomy that includes physical, psychological, cyber, socio‑economic and environmental harms
EXPLANATION
The audience member requests a broader taxonomy of harms that goes beyond traditional physical safety to incorporate psychological, cyber, socio‑economic and environmental impacts.
EVIDENCE
She describes the taxonomy being built at CIRI, which categorises harms into physical, psychological, cyber, socio-economic, environmental, and algorithmic categories, and mentions tools for assessing perceptions in healthcare settings [264-274][275-276].
MAJOR DISCUSSION POINT
Expanded harms taxonomy
AGREED WITH
Speaker 1, Speaker 3, Moderator, Audience Member 3
Audience Member 2
1 argument · 170 words per minute · 75 words · 26 seconds
Argument 1
Implement model‑monitoring for data‑distribution drift to handle temporal changes
EXPLANATION
The audience member asks whether temporal model drift is being considered, noting that data distributions change over time and could affect model performance.
EVIDENCE
The question points to concerns about temporal modality and data drift, and Speaker 3 responds that model-monitoring for out-of-distribution data is part of safety evaluation [281-288].
MAJOR DISCUSSION POINT
Temporal drift monitoring
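The out-of-distribution check Speaker 3 alludes to could be sketched with a Population Stability Index (PSI), a common distribution-drift statistic. The stdlib-only function below is a minimal illustration under assumed binning and smoothing choices, not the monitoring used in any of the projects discussed.

```python
import math

def psi(reference: list[float], live: list[float], bins: int = 10) -> float:
    """Population Stability Index between a reference and a live sample.
    0 means identical binned distributions; larger values mean more drift."""
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0  # guard against a constant reference

    def hist(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp overflow to last bin
            counts[max(i, 0)] += 1                    # clamp underflow to first bin
        # Smooth empty bins so the log below is always defined
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    p, q = hist(reference), hist(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

A PSI near 0 means the live data still matches the training-time reference; values above roughly 0.2 are conventionally read as meaningful drift and would trigger the model review the audience member asks about.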
Moderator
5 arguments · 125 words per minute · 969 words · 462 seconds
Argument 1
Develop robust incident‑response frameworks for AI‑enabled biosecurity risks
EXPLANATION
The moderator points out that current systems lack dedicated incident‑response mechanisms for AI‑driven bio‑risk, and stresses that establishing such frameworks is essential to contain emerging threats promptly.
EVIDENCE
In the closing summary the moderator notes, “I especially like the point on incident response not being already there,” highlighting the gap in existing preparedness and the need to create coordinated response structures [255-257].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap in incident-response for AI-enabled biosecurity is identified in external sources, emphasizing the need for robust frameworks [S21] and referencing systematic approaches like aviation safety [S28].
MAJOR DISCUSSION POINT
Incident‑response gap
Argument 2
Promote cross‑border data sharing and legal safe‑harbors to overcome fragmentation
EXPLANATION
The moderator emphasizes that fragmented national surveillance and data‑sharing practices hinder effective bio‑security, calling for harmonised standards and pre‑negotiated legal safe‑harbors to enable seamless cross‑border collaboration during emergencies.
EVIDENCE
The moderator’s summary references “a couple of points on the cross-border solutions and problems, we already have that,” underscoring the need for coordinated international data mechanisms and legal arrangements [258-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External literature recommends harmonised data standards and legal safe-harbors for cross-border sharing during emergencies [S22] and highlights pre-negotiated arrangements [S21].
MAJOR DISCUSSION POINT
Cross‑border data coordination
Argument 3
Balance managed access with DIY‑oriented research to prevent uncontrolled diffusion while supporting innovation
EXPLANATION
The moderator notes the tension between tightly managed access to high‑risk AI tools and the reality of grassroots or DIY scientific activity, arguing that policies must find a middle ground that safeguards security without stifling low‑resource innovation.
EVIDENCE
In the summary the moderator remarks on “managed access, safeguards, and comparing government capacity to manage that versus letting it out for more DIY-oriented science,” identifying this as a key area of concern [259-261].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between open-source DIY science and security is discussed, with calls for balanced policies that preserve innovation while managing risk [S22] and [S21].
MAJOR DISCUSSION POINT
Managed access vs DIY research
Argument 4
Emerging scientific powers from the Global South should lead through collaboration and tailored governance approaches
EXPLANATION
The moderator calls for the Global South’s scientific communities to take a leadership role, stressing that collaboration and context‑specific governance models are needed rather than simply importing Western frameworks.
EVIDENCE
The moderator states, “Emerging scientific powers… collaboration is key. Tailored approach…,” indicating the expectation for these regions to shape governance with locally adapted solutions [260-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sources advocate for leadership by emerging scientific powers through collaboration and context-specific governance rather than importing Western models [S21] and [S22].
MAJOR DISCUSSION POINT
Leadership of emerging scientific powers
AGREED WITH
Speaker 3, Speaker 1
Argument 5
Adopt systemic, institution‑level safety evaluation rather than isolated algorithm audits
EXPLANATION
The moderator warns that focusing solely on algorithmic audits ignores the broader institutional context, advocating for safety assessments that integrate organisational processes, governance structures, and capacity considerations.
EVIDENCE
In the closing remarks the moderator cautions, “we risk auditing algorithms while ignoring the institutions that operationalize them,” highlighting the need for a holistic, systemic evaluation approach [260-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External evidence warns against isolated algorithm audits and promotes systemic, institution-level safety evaluation [S21] and draws on aviation systemic safety practices [S28].
MAJOR DISCUSSION POINT
Systemic safety evaluation
Agreements
Agreement Points
Decentralized oversight and empowerment of local biosafety and information‑security offices
Speakers: Speaker 1, Speaker 2
Decentralized checks and empowerment of biosafety officers
Six‑monthly independent monitoring and AI safety institute model
Both speakers argue that a single central authority cannot cope with AI-enabled bio-risk and that oversight should be distributed across many institutional actors. Speaker 1 calls for empowering existing information-security and biosafety offices and creating a network of checks and balances [24-31][295-299]. Speaker 2 proposes an independent, credentialed AI safety institute that conducts regular six-monthly monitoring, providing a decentralized, yet formally linked, capability for continuous risk assessment [105-108][113-118].
POLICY CONTEXT (KNOWLEDGE BASE)
This approach aligns with the “India-first” policy that stresses context-aware, locally grounded governance while remaining globally coherent, and mirrors the decentralized safety procedures advocated in aviation-style AI risk management frameworks [S44][S46].
Tiered / capability‑based governance rather than blanket restrictions
Speakers: Speaker 1, Speaker 2, Moderator
Move safeguards upstream; tiered risk classification for biodesign tools
Tiered access and contextual norms for high‑risk tools
Differentiated, capability‑level governance rather than blanket restrictions
Balance managed access with DIY‑oriented research
All three participants stress that governance should be differentiated according to the risk or capability of a tool. Speaker 1 proposes a tiered risk classification that puts virus-focused biodesign tools in a higher-risk category [142-146]. Speaker 2 recommends a tiered-access model with contextual norms and KYC-like procedures for high-risk capabilities [41-45][49-56]. The Moderator highlights the tension between managed access and DIY research, urging a middle-ground approach [259-261].
POLICY CONTEXT (KNOWLEDGE BASE)
The recommendation reflects the differentiated, capability-level governance model promoted in recent AI safety literature, which calls for tiered access to high-risk capabilities while keeping open-source tools available to innovators [S43][S47][S45].
Pre‑deployment assessment and integration of AI safety checks into funding/review processes
Speakers: Speaker 1, Speaker 2
Pre‑deployment assessments and integration of AI evaluation into grant reviews
Tiered access and contextual norms for high‑risk tools
Both speakers advocate embedding safety checks early in the research pipeline. Speaker 1 suggests mandatory pre-deployment assessments and linking AI evaluation to grant-review panels [147-148]. Speaker 2 praises structured pre-deployment rubrics and likens them to KYC processes, arguing they are essential before releasing frontier models [44-48][49-50].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with ISO/IEC 23894:2023 guidance on embedding AI risk assessments into project lifecycles and with recommendations for an embedded “safety dial” that balances safeguards with operational needs [S45][S46].
Capacity development, domestic evaluation capacity and tech‑sovereignty
Speakers: Speaker 1, Speaker 3
Heterogeneous governance capacity; training and tech‑sovereignty measures required
Cross‑trained AI‑biosafety review panels and domestic evaluation capacity
Both speakers note the uneven governance capacity across institutions and the need for locally-grown expertise. Speaker 1 points to wide heterogeneity in Indian (and Global South) institutions and calls for targeted training and tech-sovereignty strategies [124-131][155-156]. Speaker 3 calls for cross-trained review panels and investment in domestic evaluation capacity such as AI safety institutes [73-78][148-149].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes the “India-first” emphasis on building domestic evaluation capacity and technological sovereignty while staying aligned with international standards and best-practice frameworks [S44][S42][S46].
Harmonised public‑health data standards and pre‑negotiated legal safe‑harbours for cross‑border sharing
Speakers: Speaker 2, Moderator
Harmonise data standards (e.g., federated HL7‑FHIR) and create legal safe‑harbours for emergency data sharing
Speaker 2 stresses the need for federated data-standard frameworks (e.g., HL7-FHIR) and pre-negotiated legal safe-harbours to enable rapid cross-border data exchange during emergencies [226-234]. The Moderator echoes this by highlighting the importance of cross-border solutions and legal mechanisms to avoid fragmentation [258-262].
POLICY CONTEXT (KNOWLEDGE BASE)
Draws on cross-border data-flow frameworks that promote interoperable health data standards and pre-negotiated safe-harbour provisions, as outlined in DCO-based harmonisation initiatives and international standards exchanges [S48][S40].
Institutional incident‑response and reporting mechanisms for AI‑enabled bio‑risk
Speakers: Speaker 1, Speaker 3, Moderator, Audience Member 3
Need for coordinated, multi‑agency incident response framework
Develop a taxonomy that includes physical, psychological, cyber, socio‑economic and environmental harms
All parties agree that current systems lack a dedicated incident-response framework. Speaker 1 calls for empowering biosafety officers and establishing reporting channels to top leadership [295-299]. Speaker 3 describes an incident-reporting framework tailored to Indian contexts that captures a broad taxonomy of harms [269-272]. The Moderator explicitly notes the gap in incident-response mechanisms [255-257]. Audience Member 3 asks what a successful multi-agency web of prevention would look like, underscoring the demand for such a framework [290-293].
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors the incident-response and continuous monitoring mechanisms advocated in aviation-style AI safety and ISO risk-management standards for systematic reporting of bio-risk incidents [S46][S45].
South‑South collaboration platforms and tailored governance for emerging scientific powers
Speakers: Speaker 3, Moderator, Speaker 1
Global‑South network for trustworthy AI and safety commons to share best practices
Emerging scientific powers from the Global South should lead through collaboration and tailored governance approaches
Speaker 3 announces a Global-South network for trustworthy AI and an upcoming safety commons to help low-resource countries share tools and benchmarks [164-166]. The Moderator calls for the Global South to shape governance rather than merely import Western frameworks, emphasizing collaboration and context-specific solutions [260-262]. Speaker 1 highlights India’s vibrant but uneven scientific ecosystem and the need to leverage it for regional leadership [15-16].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with the “India-first” collaborative model that encourages South-South partnerships and context-specific governance while remaining compatible with global coordination efforts [S44][S42].
Use of AI itself as an automation tool for continuous risk monitoring
Speakers: Speaker 1, Speaker 2
Use AI as an automation tool for continuous risk monitoring
Both speakers see AI as part of the solution to monitor AI-enabled bio-tools. Speaker 2 explicitly mentions using AI to automate risk-monitoring systems [106-108]. Speaker 1 later notes that AI can help solve safety challenges, such as agentic AI checking for jailbreak attempts in vaccine development platforms [246-248].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by examples of AI-driven continuous safety monitoring in physical-world applications and aviation risk management, highlighting AI’s role in ongoing risk detection [S41][S46].
Similar Viewpoints
Both argue that safety checks must be performed before release and should be embedded in funding decisions, using structured rubrics and contextual norms to balance openness with risk mitigation [44-48][49-50][147-148].
Speakers: Speaker 1, Speaker 2
Pre‑deployment assessments and integration of AI evaluation into grant reviews
Tiered access and contextual norms for high‑risk tools
Both highlight the uneven capacity across institutions in the Global South and call for targeted training, domestic expertise, and cross‑disciplinary review panels to build a resilient biosafety ecosystem [124-131][155-156][73-78][148-149].
Speakers: Speaker 1, Speaker 3
Heterogeneous governance capacity; training and tech‑sovereignty measures required
Cross‑trained AI‑biosafety review panels and domestic evaluation capacity
Both stress the necessity of interoperable public‑health data standards and pre‑negotiated legal mechanisms to avoid fragmentation during crises [226-234][258-262].
Speakers: Speaker 2, Moderator
Harmonise data standards and create legal safe‑harbours for emergency sharing
Unexpected Consensus
Open‑source tools are essential and should not be conflated with danger
Speakers: Speaker 1, Speaker 2
Preserve open‑source innovation while protecting low‑resource settings
Open‑source tools are essential and should not be banned
While Speaker 1 focuses on capacity-building for low-resource settings, his argument list includes preserving open-source innovation, which aligns with Speaker 2’s explicit defence of open-source tools as critical for innovation in low-resource environments [15-16][54-55]. This convergence was not obvious given their different primary emphases (bio-security vs AI governance).
POLICY CONTEXT (KNOWLEDGE BASE)
Reinforced by policy briefs that separate open-source innovation from high-risk tool restriction, advocating tiered access that preserves open-source benefits while safeguarding dangerous capabilities [S43][S45].
AI can be part of the solution to its own safety challenges
Speakers: Speaker 1, Speaker 2
Use AI as an automation tool for continuous risk monitoring
Both speakers, despite coming from different backgrounds, agree that AI should be leveraged to automate risk monitoring and even to detect misuse (e.g., jailbreak detection in vaccine platforms) [106-108][246-248]. This mutual view that AI can help police AI-enabled bio-risk was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with the view that AI systems can be employed for self-monitoring and mitigation of risks, as discussed in AI safety literature for physical systems [S41][S46].
Overall Assessment

There is strong convergence among the panelists on the need for decentralized, capability‑based governance, early pre‑deployment safety checks, capacity building in the Global South, harmonised data standards with legal safe‑harbours, and robust multi‑agency incident‑response frameworks. These shared positions span technical, policy, and institutional dimensions.

High consensus – the speakers largely reinforce each other’s proposals, indicating a collective readiness to pursue coordinated, context‑sensitive governance mechanisms. This consensus suggests that future policy work can build on these common foundations rather than reconciling divergent views.

Differences
Different Viewpoints
Centralized AI safety institute vs. decentralized biosafety oversight
Speakers: Speaker 1, Speaker 2
Decentralized checks and empowerment of biosafety officers
Six‑monthly independent monitoring and AI safety institute model
Speaker 1 argues that oversight of AI-enabled biosecurity should be distributed across many institutional actors, empowering local information-security and biosafety offices and avoiding a single authority in Delhi [24-31][295-299]. Speaker 2 proposes a dedicated AI safety institute with formal government ties that would conduct six-monthly independent monitoring, requiring substantial multilateral investment [105-112][113-118]. This reflects a clash between a decentralized, bottom-up model and a more centralized, institutionalized approach.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the tension between centralized standard-setting bodies (e.g., ISO/IEC) and decentralized, context-aware governance models advocated for biosafety and AI risk management [S45][S44][S46].
Open‑source innovation vs. tiered‑access restrictions for high‑risk tools
Speakers: Speaker 1, Speaker 2
Preserve open‑source innovation while protecting low‑resource settings
Tiered access and contextual norms for high‑risk tools
Speaker 1 stresses that open-source tools are essential for innovation in low-resource environments and should not be banned; instead, capacity-building and training are needed [15-16][75][190-191]. Speaker 2 recommends a tiered-access model with stricter controls for high-risk AI-enabled biological tools, likening it to KYC procedures and arguing that differentiated, capability-level governance is preferable to blanket restrictions [41-45][49-56][57-58]. The two positions differ on how much access should be limited.
POLICY CONTEXT (KNOWLEDGE BASE)
Directly addressed in policy discussions recommending tiered access that protects high-risk capabilities while keeping open-source tools freely available for innovation [S43][S45].
Pre‑deployment assessments vs. periodic post‑deployment monitoring
Speakers: Speaker 1, Speaker 2
Pre‑deployment assessments and integration of AI evaluation into grant reviews
Six‑monthly independent monitoring and AI safety institute model
Speaker 1 proposes embedding safety checks early in the research pipeline through mandatory pre-deployment assessments linked to grant-review processes and cross-trained review panels [147-148]. Speaker 2 advocates a six-monthly independent monitoring regime, using AI to automate continuous risk monitoring and suggesting ongoing post-deployment oversight [105-108][110-112]. Both agree on the need for assessment but disagree on timing and mechanism.
POLICY CONTEXT (KNOWLEDGE BASE)
Juxtaposes the pre-deployment risk-assessment emphasis of ISO/IEC guidance with the continuous post-deployment monitoring approaches championed in aviation-style AI safety frameworks [S45][S46].
Technical tiered risk classification vs. socio‑cultural benchmarking
Speakers: Speaker 1, Speaker 3
Move safeguards upstream; tiered risk classification for biodesign tools
AI readiness gaps and need for socio‑cultural benchmarks
Speaker 1 suggests a tiered risk classification that separates high-risk biodesign tools (e.g., virus-focused) from lower-risk ones, focusing on capability-aware safeguards [142-146]. Speaker 3 argues that safety benchmarks must incorporate socio-cultural factors specific to each deployment environment, noting that leading LLMs fail 20-30 % of risk assessments in biological settings due to lack of such context [62-70]. The disagreement lies in emphasizing technical capability versus contextual, cultural evaluation.
Unexpected Differences
Centralized institute vs. decentralized local empowerment
Speakers: Speaker 1, Speaker 2
Speaker 1: Decentralized checks and empowerment of biosafety officers
Speaker 2: Six‑monthly independent monitoring and AI safety institute model
It was not anticipated that two experts focusing on biosecurity and AI safety would diverge sharply on the locus of governance. Speaker 1 pushes for a network of local checks, while Speaker 2 envisions a credentialed, centrally‑linked AI safety institute. This contrast was not evident earlier in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors the broader debate captured in the “India-first” approach and international standards discourse about balancing global coordination with locally empowered oversight structures [S44][S45][S46].
Open‑source unrestricted access vs. tiered‑access control
Speakers: Speaker 1, Speaker 2
Speaker 1: Preserve open‑source innovation while protecting low‑resource settings
Speaker 2: Tiered access and contextual norms for high‑risk tools
Both speakers are advocates of responsible AI, yet they unexpectedly clash on whether open‑source tools should remain freely available (Speaker 1) or be subject to tiered, KYC‑like restrictions (Speaker 2). The disagreement surfaces despite shared concern for low‑resource innovation.
Overall Assessment

The panel shows substantial disagreement on governance architecture (centralized institute vs. decentralized local checks), on the degree of access restriction for high‑risk AI tools (open‑source freedom vs. tiered access), and on the primary timing of safety assessments (pre‑deployment embedding vs. periodic post‑deployment monitoring). There is also a conceptual split between technical risk classification and socio‑cultural benchmarking. While participants converge on the need for oversight, incident reporting, and AI‑driven monitoring, they diverge on implementation pathways.

Moderate to high disagreement. The divergent views on centralisation, access control, and assessment timing could lead to fragmented policy approaches if not reconciled, potentially weakening collective biosecurity safeguards and slowing coordinated action across the Global South.

Partial Agreements
Both speakers agree that AI should be employed to monitor risks continuously. Speaker 2 highlights AI‑driven automation for six‑monthly risk monitoring [106-108], while Speaker 3 points to model‑monitoring for out‑of‑distribution data as part of safety evaluation [286-288]. They differ on the specific focus—general risk monitoring versus drift detection—but share the goal of AI‑enabled ongoing oversight.
Speakers: Speaker 2, Speaker 3
Speaker 2: Use AI as an automation tool for continuous risk monitoring
Speaker 3: Model‑monitoring for data‑distribution drift to handle temporal changes
Both emphasize the need for systematic incident reporting. Speaker 1 calls for empowering biosafety officers and establishing reporting mechanisms up to national leadership [295-299]. Speaker 3 describes an Indian‑specific incident‑reporting framework that captures harms in marginalized communities [169-172]. The agreement is on establishing reporting structures; the divergence is in the specific institutional design.
Speakers: Speaker 1, Speaker 3
Speaker 1: Empower biosafety officers and create reporting channels to top leadership
Speaker 3: Incident‑reporting framework and audit mechanisms tailored to local contexts
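The model‑monitoring for distribution drift mentioned above can be made concrete with a small sketch. The function below compares a reference (training‑time) sample against live data using the two‑sample Kolmogorov–Smirnov statistic; the 0.2 alert threshold and the toy data are illustrative assumptions, not values from the discussion, and a production system would calibrate them per feature.

```python
# Illustrative sketch: flag data-distribution drift between a reference
# sample and a live (deployment-time) sample via the two-sample
# Kolmogorov-Smirnov statistic. Threshold and data are hypothetical.

def ks_statistic(sample_a, sample_b):
    """Maximum distance between the two empirical CDFs."""
    a = sorted(sample_a)
    b = sorted(sample_b)
    values = sorted(set(a) | set(b))
    max_diff = 0.0
    for v in values:
        cdf_a = sum(1 for x in a if x <= v) / len(a)
        cdf_b = sum(1 for x in b if x <= v) / len(b)
        max_diff = max(max_diff, abs(cdf_a - cdf_b))
    return max_diff

def drift_alert(reference, live, threshold=0.2):
    """Return True if the live sample has drifted past the threshold."""
    return ks_statistic(reference, live) > threshold

reference = [0.1 * i for i in range(100)]          # stable historical data
stable = [0.1 * i + 0.001 for i in range(100)]     # essentially the same distribution
shifted = [0.1 * i + 5.0 for i in range(100)]      # clearly shifted live data

print(drift_alert(reference, stable))   # → False
print(drift_alert(reference, shifted))  # → True
```

In low‑resource settings this kind of check is attractive because it needs no labels and no heavyweight dependencies, but the threshold must be tuned against false‑alarm tolerance.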
Takeaways
Key takeaways
AI‑enabled bio‑design tools shift risk upstream from physical labs to the design phase, requiring new governance approaches.
Decentralized, multi‑agency oversight and empowerment of biosafety officers is essential, especially in heterogeneous ecosystems like India and the Global South.
Open science can be preserved by using tiered, capability‑based access controls and contextual norms rather than blanket bans on open‑source tools.
Significant AI‑readiness gaps exist in many Global South countries; socio‑cultural benchmarks, training, and tech‑sovereignty measures are needed.
Independent evaluation, red‑teaming and six‑monthly monitoring should become institutionalised, possibly via an AI safety institute model linked to existing bodies (e.g., WHO, BWC).
Safety assessments must move upstream, include tiered risk classification for biodesign tools, and be integrated into grant‑review and institutional review processes.
Cross‑border collaboration is required: harmonised data standards (e.g., federated HL7‑FHIR), pre‑negotiated legal safe‑harbors, and shared incident‑reporting frameworks.
A broader taxonomy of harms (physical, psychological, cyber, socio‑economic, environmental) and continuous model‑drift monitoring are needed to capture emerging risks.
Resolutions and action items
Launch a Global‑South network for trustworthy AI and an AI safety commons to share evaluation tools and best practices.
Adopt tiered, capability‑level governance (pre‑deployment assessments, KYC‑style credentialing) for high‑risk bio‑AI tools.
Integrate AI safety checks into grant‑review processes and create cross‑trained AI‑biosafety review panels at institutions.
Establish six‑monthly independent monitoring cycles, potentially through a dedicated AI safety institute with government liaison.
Develop and deploy a tiered risk‑classification scheme for biodesign tools, with higher scrutiny for pathogen‑related models.
Create federated data‑standard frameworks (e.g., HL7‑FHIR adapted for public‑health surveillance) and negotiate legal safe‑harbor agreements for emergency data sharing.
Implement an incident‑reporting taxonomy covering physical, psychological, cyber, socio‑economic and environmental harms, and roll it out in Indian and other Global‑South contexts.
Invest in capacity‑building programmes for biosafety officers, bio‑security officers, and AI safety personnel, including tech‑sovereignty measures.
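The tiered, capability‑level governance proposed here can be illustrated with a minimal sketch. The tier names and the simple ordering rule below are hypothetical assumptions; a real KYC‑style credentialing scheme would add identity verification, audit trails, and institutional sign‑off.

```python
# Hypothetical sketch of tiered, capability-level access control for
# bio-AI tools, loosely analogous to KYC credentialing. Tier names and
# the policy rule are illustrative, not an existing standard.

RISK_TIERS = {"low": 0, "moderate": 1, "high": 2}             # tool risk levels
CREDENTIAL_TIERS = {"public": 0, "verified": 1, "vetted": 2}  # user credentials

def access_allowed(tool_risk: str, user_credential: str) -> bool:
    """A user may access a tool only if their credential tier
    meets or exceeds the tool's risk tier."""
    return CREDENTIAL_TIERS[user_credential] >= RISK_TIERS[tool_risk]

# Low-risk, open-source tools stay available to everyone:
print(access_allowed("low", "public"))    # → True
# High-risk biodesign tools require vetted, credentialed researchers:
print(access_allowed("high", "public"))   # → False
print(access_allowed("high", "vetted"))   # → True
```

The point of the sketch is that restriction is a function of capability, not a blanket ban: low‑risk open‑source tools remain open while only the high‑risk tier requires credentials.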
Unresolved issues
Concrete mechanisms for decentralised oversight: how authority, funding and accountability will be distributed among regional bodies.
Funding models and governance structures for the proposed six‑monthly monitoring and AI safety institute.
Details of how tiered access and KYC‑style credentialing will be operationalised without stifling legitimate open‑source innovation.
Coordination of multiple national agencies (e.g., health, environment, security) across borders and the establishment of interoperable response protocols.
Management of DIY and small‑scale commercial bio‑AI activities that fall outside formal regulatory frameworks.
Specific processes for negotiating and enforcing legal safe‑harbors for cross‑border data sharing in advance of emergencies.
Scalable approaches for continuous model‑drift detection and mitigation in low‑resource settings.
Suggested compromises
Adopt tiered, capability‑based access controls rather than outright bans on open‑source bio‑AI tools.
Combine top‑down leadership (national coordination) with bottom‑up empowerment of institutional biosafety officers and review panels.
Use pre‑deployment assessments together with ongoing, automated risk‑monitoring to balance thoroughness and agility.
Allow differentiated governance at the capability level while maintaining open‑source availability for low‑risk tools, protecting low‑resource innovation.
Integrate AI safety evaluation into existing grant and institutional review processes rather than creating parallel, heavyweight structures.
Thought Provoking Comments
The risk landscape is shifting upstream to the design side because AI‑enabled biodesign tools decouple capability from physical containment. We need decentralized checks and balances rather than a single authority in Delhi.
Highlights a structural change in how bio‑risk is generated, moving the focus from labs to the design phase and calls for a fundamentally different governance model.
Set the stage for the discussion on governance architecture, prompting other speakers to consider decentralised oversight, institutional capacity, and the need for new regulatory mechanisms.
Speaker: Speaker 1 (biosecurity expert)
We should adopt tiered access and contextual norms – a pre‑deployment assessment with structured rubrics, similar to KYC, allowing credentialed researchers to work on defensive projects while keeping open‑source tools available.
Introduces a concrete, nuanced framework that balances openness with security, moving beyond binary yes/no answers.
Shifted the conversation from abstract risk to actionable policy levers, leading the moderator to ask about open‑science trade‑offs and inspiring later talks on differentiated governance.
Speaker: Speaker 2 (PT)
AI readiness varies hugely across the Global South; we need socio‑cultural evaluations, small‑language models for edge deployment, and participatory approaches that involve end‑users from the start.
Brings attention to the mismatch between high‑resource model development and low‑resource deployment contexts, emphasizing cultural relevance and capacity building.
Prompted a deeper dive into regional disparities, influencing subsequent remarks about tailored benchmarks, local incident‑reporting frameworks, and the need for a Global South AI safety commons.
Speaker: Speaker 3 (Geeta)
A six‑monthly, independent, credentialed AI safety institute – linked formally to governments and anchored to the Biological Weapons Convention or WHO – could institutionalise continuous risk monitoring and pre‑deployment assessment.
Proposes a concrete institutional model that mirrors the IAEA for nuclear safety, addressing the gap of sustained, systematic oversight.
Created a turning point toward institutional solutions, leading other panelists to discuss funding, governance structures, and the feasibility of such an institute in the Global South.
Speaker: Speaker 2 (PT)
We must harmonise data standards, create legal safe‑harbours for cross‑border sharing during emergencies, and develop shared evaluation criteria for biosurveillance systems; otherwise fragmentation will cost lives.
Identifies a concrete technical‑legal bottleneck that directly links data interoperability with public‑health outcomes, extending the discussion beyond AI model risk to system‑level coordination.
Redirected the dialogue toward cross‑national collaboration, prompting audience questions about temporal drift and inspiring suggestions about federated standards like HL7‑FHIR.
Speaker: Speaker 2 (PT)
Agentic AI can be used to guard against misuse – e.g., CEPI’s platform that checks for jailbreak attempts – and we still have a digital‑to‑physical barrier that limits the translation of malicious code into real pathogens.
Turns the narrative from AI as a pure threat to AI as a potential defensive tool, and re‑introduces the concept of physical containment as a complementary safeguard.
Balanced the earlier risk‑heavy tone, opened space for discussing AI‑enabled safety tools, and reinforced the need for layered, multi‑modal defenses.
Speaker: Speaker 1 (biosecurity expert)
Overall Assessment

The discussion was driven forward by a series of pivot points that moved the conversation from abstract risk identification to concrete governance architectures, regional capacity considerations, and technical‑legal solutions. Speaker 1’s framing of an upstream, design‑centred risk landscape forced participants to rethink traditional biosafety models. PT’s tiered‑access and institutional‑institute proposals supplied actionable policy scaffolding, while Geeta’s emphasis on AI readiness and socio‑cultural fit highlighted the inequities that any global framework must address. The later focus on data‑standard harmonisation and legal safe‑harbours broadened the scope to system‑level coordination, linking AI safety to public‑health infrastructure. Collectively, these comments reshaped the dialogue into a multi‑layered, globally inclusive roadmap rather than a single‑track regulatory narrative.

Follow-up Questions
How can we preserve the benefits of open science while preventing the destabilizing diffusion of high‑risk AI capabilities in biology?
Balancing openness with security is critical to maintain scientific collaboration without enabling misuse of powerful biodesign tools.
Speaker: Speaker 2
What are the most immediate institutional gaps in evaluating AI systems for biosecurity in India and Southeast Asia?
Identifying gaps in AI readiness, socio‑cultural assessment, and capacity building is essential for effective governance in these regions.
Speaker: Speaker 3
Should independent evaluation and red‑team testing of AI systems that generate biological outputs become a global norm, and how could such a framework be implemented?
Establishing systematic, periodic independent assessments could provide continuous risk monitoring for frontier models.
Speaker: Speaker 2
How can safety measures remain rigorous and feasible within heterogeneous, low‑resource research ecosystems?
Tailoring safeguards to varied institutional capacities is needed to avoid perfunctory compliance and ensure real protection.
Speaker: Speaker 1
How do we ensure safety, evaluation, and interoperability across legacy and emerging AI‑enabled scientific programs without fragmentation?
Coordinated standards and shared evaluation criteria are required to prevent siloed efforts and data hoarding.
Speaker: Speaker 2
How should harms—including psychological, socio‑economic, and environmental—be defined and categorized for AI‑enabled biosecurity tools?
A comprehensive taxonomy is needed to capture the full spectrum of risks beyond traditional physical harms.
Speaker: Audience Member 1 (directed to Speaker 3)
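As one way to make such a taxonomy concrete, the sketch below encodes the five harm categories raised in the discussion as a minimal incident‑report record. All field names and the example incident are illustrative assumptions, not part of any existing framework.

```python
# Illustrative sketch of the broader harm taxonomy (physical,
# psychological, cyber, socio-economic, environmental) as a minimal
# incident-report record. Field names are hypothetical.

from dataclasses import dataclass
from enum import Enum

class HarmCategory(Enum):
    PHYSICAL = "physical"
    PSYCHOLOGICAL = "psychological"
    CYBER = "cyber"
    SOCIO_ECONOMIC = "socio-economic"
    ENVIRONMENTAL = "environmental"

@dataclass
class IncidentReport:
    summary: str
    categories: list          # one incident may span several harm types
    region: str
    affects_marginalized_group: bool = False

report = IncidentReport(
    summary="Model output enabled targeted misinformation about a local clinic",
    categories=[HarmCategory.PSYCHOLOGICAL, HarmCategory.SOCIO_ECONOMIC],
    region="IN",
    affects_marginalized_group=True,
)
print(len(report.categories))  # → 2
```

Allowing multiple categories per incident matters: many AI‑enabled harms (e.g., disinformation) are simultaneously psychological and socio‑economic, and a single‑label scheme would under‑count them.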
How can we address temporal data drift and model performance degradation over time in AI‑driven biosurveillance and biosecurity applications?
Monitoring and adapting to distribution shifts is vital to maintain model reliability as data evolves.
Speaker: Audience Member 2 (directed to Speaker 3)
What would a successful web of prevention and incident‑response framework look like for cross‑border biosecurity threats, and which existing models should be emulated?
Designing a coordinated, multi‑agency response system is crucial for rapid containment of biosecurity incidents.
Speaker: Audience Member 3 (directed to Speakers 1 and 2)
How can decentralized, adaptive oversight mechanisms for AI‑enabled biosecurity be designed and operationalized?
Decentralized checks and balances can overcome the limitations of a single central authority, especially in large, diverse nations.
Speaker: Speaker 1
What tiered risk classification schemes should be applied to biodesign tools and AI models to enable proportionate safeguards?
Differentiating high‑risk from low‑risk tools helps allocate resources efficiently and avoid over‑regulation.
Speaker: Speaker 1
How can the Global South build AI safety capacity through institutes, commons, and networks (e.g., AI safety commons, Global South trustworthy AI network)?
Localized expertise and collaborative platforms are needed to tailor governance to regional contexts.
Speaker: Speaker 3
How can data standards for AI‑enabled biosurveillance be harmonized across countries, possibly using federated frameworks like HL7 FHIR?
Standardized, interoperable data formats enable effective cross‑border monitoring and response.
Speaker: Speaker 2
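As an illustration of what a harmonised, FHIR‑style record might look like, the sketch below builds a simplified Observation resource for a biosurveillance test result. It is a simplified sketch, not a validated FHIR payload; the LOINC and SNOMED CT codes shown are examples only, and a real deployment would validate against the FHIR R4 specification.

```python
# Simplified, illustrative FHIR R4-style Observation resource for
# cross-border biosurveillance reporting. Not a validated payload;
# codes, display strings and the date are placeholder examples.

import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "94500-6",       # example LOINC code for a SARS-CoV-2 RNA test
            "display": "SARS-CoV-2 RNA [Presence] in Respiratory specimen"
        }]
    },
    "valueCodeableConcept": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "260373001",     # example SNOMED CT code: Detected
            "display": "Detected"
        }]
    },
    "effectiveDateTime": "2025-01-15"
}

payload = json.dumps(observation)
print("resourceType" in payload)  # → True
```

Because every participating country serialises results against the same resource shape and shared code systems, a federated network can aggregate signals without first reconciling incompatible local formats.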
What legal safe‑harbor mechanisms are needed to facilitate cross‑border data sharing during public‑health emergencies?
Pre‑negotiated legal protections can reduce hesitation to share critical data when crises arise.
Speaker: Speaker 2
How can shared evaluation criteria and incident‑reporting mechanisms be tailored to regional contexts (e.g., Indian settings) while maintaining consistency?
Context‑specific frameworks ensure relevance while enabling comparable assessments across jurisdictions.
Speaker: Speaker 3
How can agentic AI be employed to monitor and prevent misuse of AI tools in vaccine development and other high‑risk domains?
Using AI to guard against AI‑driven threats creates a self‑reinforcing safety loop.
Speaker: Speaker 1
What tech‑sovereignty measures can be implemented to control the import, export, and deployment of AI safety‑critical models and data?
National control over critical AI assets can mitigate risks of uncontrolled diffusion.
Speaker: Speaker 1
What would be the design and governance model for a six‑monthly, multilateral AI safety institute anchored to the Biological Weapons Convention or WHO?
Regular, internationally coordinated assessments could provide continuous oversight of frontier bio‑AI systems.
Speaker: Speaker 2
What are the feasibility and performance considerations of deploying small language models for edge use in low‑resource settings?
Smaller models may offer appropriate capability without the overhead of large, resource‑intensive systems.
Speaker: Speaker 3
How should socio‑cultural impacts and biases of AI models be evaluated for specific deployment environments in Southeast Asia?
Context‑aware assessments are needed to prevent harms that arise from cultural mismatches or bias.
Speaker: Speaker 3
How can pre‑deployment assessment rubrics and credentialed researcher networks with tiered confidentiality be designed to balance openness and security?
Structured access controls can enable responsible research while limiting exposure of dangerous capabilities.
Speaker: Speaker 2

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI Algorithms and the Future of Global Diplomacy


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel at the AI Impact Summit brought together German officials and experts to discuss how artificial intelligence is being used as both a diplomatic tool and a geopolitical factor [1-4]. Raphael Leuner explained that the German government launched data labs across ministries in 2021, including one in the Foreign Office, allowing data scientists to work directly inside the ministry and rapidly co-create AI solutions rather than following lengthy traditional IT projects [12-14][17-20][22-24]. He emphasized that this internal positioning enables fast development of AI tools for tasks such as breaking data silos and supporting diplomatic negotiations, a model he sees as especially suited for the fast-moving AI field [16-18][21-23].


Shahani Yaktiyami noted that technology has always shaped foreign policy, and today AI represents the latest wave, with middle powers like India and Germany leveraging regulatory influence or sector-specific applications instead of competing for frontier models [36-41][46-50]. She argued that the current AI race is framed as a competition between the United States and China, but middle powers can assert influence through the AI value chain, for example by focusing on industrial AI or healthcare applications [48-51]. Norman Schulz added that AI’s rapid diffusion creates risks that require international regulation, citing historical parallels with nuclear weapons and calling for cooperation to mitigate those risks [66-68][76-78]. He highlighted the Global Digital Compact and the newly created Independent International Scientific Panel on AI as mechanisms to ensure inclusive, science-based governance and to give non-frontier countries a voice in setting AI rules [166-179].


Shyam Krishnakumar described India’s position as strong in building context-appropriate models and applications, though not yet capable of developing large frontier models, and identified opportunities for Indo-German cooperation in industrial AI, healthcare, and automation [92-100][108-119]. Raphael confirmed that the Foreign Office prioritises open-source solutions, reusing existing applications and developing negotiation support tools, while also observing that many leading open-source AI models currently originate from China, raising strategic considerations [128-133][134-137]. Shahani warned that adopting foreign or open-source models must account for national security risks, and that AI is already being integrated into think-tank reports and other diplomatic workflows, requiring geopolitical risk assessment in technology decisions [141-148][149-155].


In response to audience questions, Norman stated that AI will automate data-consumption tasks for diplomats, freeing them to focus on analysis and relationship-building, but will not replace human decision-making [250-258]. Both Shahani and Raphael cautioned against allowing AI to shape geopolitical narratives, noting the danger of bias and the use of AI by actors to amplify misinformation, while emphasizing the need for human oversight and bias-detection tools [274-281][284-286][295-298]. The discussion concluded that middle powers should collaborate on open-source AI development and sector-specific applications, leveraging their complementary strengths to create inclusive governance and practical tools for diplomacy [212-214][215-218].


Keypoints


Major discussion points


AI implementation inside the German Foreign Office – The ministry has created a network of data labs (16 across federal ministries since 2021) that enable rapid, internal co-creation of AI tools, favouring short development cycles over traditional multi-year IT projects. The office prioritises open-source technologies and uses AI to support diplomatic work such as document analysis and negotiation preparation. [12-15][18-24][127-133][128-136]


Geopolitical framing of AI as “technology diplomacy” – Panelists stress that AI is the latest layer in a long history of technology shaping foreign policy (industrial, nuclear, space revolutions). While great powers compete for frontier AI leadership, middle powers like Germany and India can exert influence through regulation, standards, and niche value-chain strengths rather than by chasing large-scale models. [35-41][46-51][78-86]


Governance, security, and sovereignty concerns – There is broad agreement that AI carries significant risks (bias, weaponisation, dependence on foreign models) and therefore requires international cooperation, regulation, and the development of indigenous capabilities. The discussion references the UN-led Global Digital Compact, the Independent Scientific International Panel on AI, and the need for “managed interdependence” in the AI stack. [68-77][157-166][170-186]


Indo-German cooperation focused on concrete applications – Both sides see opportunities in sector-specific AI (industrial automation, healthcare, robotics) that combine Germany’s industrial data and automation expertise with India’s large talent pool and cost-effective model development. Open-source collaborations are highlighted as a way to avoid dependence on US/China-dominated frontier models. [90-110][127-136][202-214]


AI’s impact on diplomatic narratives and media – While AI can accelerate information processing, panelists warn against letting AI autonomously shape geopolitical narratives. Human oversight is essential to mitigate bias and prevent the amplification of disinformation; AI is viewed as a tool for detection and efficiency, not a substitute for diplomatic judgment. [274-283][287-293][294-298]


Overall purpose / goal of the discussion


The panel aimed to explore how artificial intelligence can be harnessed as a practical tool within foreign ministries, to understand its broader geopolitical implications, and to identify pathways for responsible governance and bilateral cooperation, particularly between Germany and India, so that middle powers can meaningfully influence the emerging AI order.


Overall tone and its evolution


Opening segment: Informative and optimistic, highlighting the rapid, internal development of AI tools and the benefits of data labs.


Mid-discussion: Shifts to a more analytical and cautionary tone, emphasizing geopolitical competition, security risks, and the need for regulation and international frameworks.


Later segment: Becomes collaborative and forward-looking, focusing on concrete Indo-German partnership opportunities and the constructive role of open-source initiatives.


Closing remarks: Return to a balanced, pragmatic tone, acknowledging both the opportunities AI offers for efficiency and the imperative of human oversight to prevent bias and misuse.


Overall, the conversation moves from enthusiasm about AI’s potential, through a sober assessment of risks and power dynamics, to a constructive outlook on cooperative solutions.


Speakers

Raphael Leuner – Data Scientist at the German Federal Foreign Office; leads AI and data labs, focuses on AI tools for diplomacy, open-source AI, and negotiation support [S1][S2].


Gunda Ehmke – Moderator/Host of the panel discussion; facilitates conversation on AI in diplomacy and policy [S3][S4].


Norman Schulz – Consulate at the Coordination Staff for AI and Digital Technologies, German Foreign Office; discusses AI governance, the Global Digital Compact, and UN scientific AI panel [S6][S7].


Shahani Yaktiyami – Senior Officer, Technology Program at the German Marshall Fund; expertise in technology policy, AI geopolitics, international relations, and AI governance frameworks [S8][S9].


Shyam Krishnakumar – Associate at the Pranav Institute (focus on emerging technology, public policy, and society from an India-first perspective); speaks on Indo-German AI cooperation, sectoral AI applications, and open-source innovation [S11][S12].


Audience – Participants from the public, e.g., Sreeni (student, Ashoka University) and Sanjeevni (radio journalist, UK), asking questions about AI automation in foreign-policy work and media narratives.


Additional speakers:


Jian – Mentioned by the moderator (“Jian, let me turn to you now”) but no role or expertise detailed in the transcript.


Full session report: Comprehensive analysis and detailed insights

The AI Impact Summit panel, moderated by Gunda Ehmke, brought together three senior German officials to discuss how artificial intelligence can serve both diplomatic practice and geopolitical strategy: Raphael Leuner (Federal Foreign Office data scientist), Dr Shahani Yaktiyami (German Marshall Fund), and Norman Schulz (Coordination Staff for AI and Digital Technologies) [1-4].


Leuner described the German government’s 2021 decision to establish data labs in every federal ministry, resulting in sixteen labs by 2022 that embed data scientists directly within ministries. This “fast co-creation” model replaces the traditional two-year, large-team IT projects with rapid prototyping of tools that break down data silos and support diplomatic work such as analysing large document collections for negotiations [12-24][16-23]. The Foreign Office favours open-source technologies, re-using applications from German states (e.g., a general chat and knowledge-base) and building bespoke negotiation-support tools that help diplomats sift through “huge piles of documents” [127-133][128-130]. Leuner also highlighted a strategic tension: many leading open-source large-language models now originate from China, raising security concerns for Europe and prompting an incentive to seek Indian alternatives [134-137].


Yaktiyami placed AI within a longer history of “technology diplomacy,” recalling the industrial, nuclear and space revolutions as precedents for how technology reshapes foreign policy [35-41]. She argued that the current AI race is framed as a US-China competition, but middle powers such as Germany and India can exert influence by leveraging comparative advantages (Germany through regulatory and standards leadership, India through application-focused deployment) rather than by chasing frontier-model development [46-51][78-86]. She noted that the summit’s branding changed from the “AI Action Summit” under the French presidency to the “AI Impact Summit” under India’s stewardship, signalling India’s desire to claim a place in global technology diplomacy [46-51]. Yaktiyami also joked that the German Marshall Fund will “force you to read” its reports, underscoring the institute’s proactive push for AI-enhanced analysis [274-280].


Schulz emphasized AI’s growing ubiquity and the need for coordinated international regulation, likening the current risk profile to the early nuclear era and urging that the United States and China eventually work together on limits and safeguards [66-78]. When asked whether the current governance approach in the German Foreign Office is adequate, he answered succinctly that “the short answer would be no,” before elaborating on broader risk-management needs [66-68]. He cited a recent Davos speech by Mark Carney, who urged middle-power cooperation on AI governance [78-82]. Schulz highlighted the UN-led Global Digital Compact and the newly created Independent International Scientific Panel on AI as mechanisms to provide inclusive, science-based governance, noting that the panel will deliver its first report ahead of the AI-for-Good dialogue in Geneva [166-186].


Krishnakumar presented the Indian perspective, noting that while India does not yet build frontier-scale models, it excels at context-appropriate innovation, low-cost large-scale inference, and rapid grassroots development of language-specific models [92-103]. He identified concrete Indo-German collaboration opportunities in industrial AI (combining Germany’s automation expertise and industrial data with India’s talent pool and cost-effective model building) and in healthcare, where India’s high surgical volume provides rich data for AI applications [108-119][112-119]. He linked these ideas to the “open-source revolution” of the 1990s, arguing that collaborative, low-cost development can democratise AI and reduce dependence on US/China-dominated stacks [219-222][90-101].


All panelists endorsed an open-source-first strategy, citing transparency, reduced vendor lock-in, and the possibility of Indo-German collaborative models [127-130][123-130][219-222][157-164]. Leuner reiterated that AI-driven negotiation tools can process documents, freeing diplomats to focus on analysis and relationship-building [130-133]. Yaktiyami warned that narrative formation must remain a human activity and that AI-generated geopolitical narratives pose security risks [274-280]. Schulz added that AI can serve as a tool for bias detection, complementing human oversight [287-293][294-298].


During the audience Q&A, a student asked which parts of foreign-policy research could be automated; Schulz replied that AI can automate information consumption and document summarisation, thereby freeing diplomats for “connecting the dots” while decision-making remains a collaborative human process [250-259]. A journalist then queried whether AI could produce unbiased media narratives; Yaktiyami responded that AI should not shape narratives at all, warning of bias from training data, while Schulz noted AI’s utility for bias detection and Leuner warned that malicious actors already exploit AI to amplify propaganda across fake websites [274-280][287-293][294-298].


The panelists collectively underscored a shared vision: over the next five years AI will become embedded across sectors (including agriculture, finance, industry, communication, and diplomacy), and middle powers can steer this diffusion by focusing on sector-specific, open-source collaborations rather than competing for frontier model supremacy [208-213][61-64][120-121]. Key next steps identified were: (i) continue the open-source-first policy within the German Foreign Office [127-136]; (ii) launch joint Indo-German pilots in industrial automation and healthcare [202-214]; (iii) contribute German expertise to the UN AI scientific panel [166-184]; and (iv) embed geopolitical risk assessments into all AI procurement processes [141-148][149-155]. By combining rapid internal co-creation, inclusive multilateral governance, and targeted bilateral projects, the participants argued that middle powers can both harness AI’s benefits for diplomacy and mitigate its attendant risks [212-218][224-230].


Session transcript: Complete transcript of the session
Gunda Ehmke

Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiyami, Senior Officer, Technology Program at the German Marshall Fund. And we have Norman Schulz, Counselor at the Coordination Staff, AI and Digital Technologies at the German Foreign Office. And to kick off the conversation today, since we will cover AI both as a topic and as a tool, I would like to start with the tool. So going to Raphael, who is a Data Scientist: how do you use AI in the Foreign Office? I also know that you have data and AI labs in the Foreign Office. So could you maybe share a little bit of your day-to-day work?

And yeah, actually, how could AI be used in diplomacy?

Raphael Leuner

Yeah, thanks so much. Maybe to take a step back and answer the question of how someone like me, a data scientist by training, ends up in a foreign ministry: I think that's something that, at least when we talk to colleagues around the world, is still rather rare. We had the lucky coincidence that, I think in 2021, the German government decided to start data labs in all of its federal ministries. And so in the years since then, up to 2022, 16 data labs have been founded in the German federal government. And I was lucky enough to be part of the one in the German Federal Foreign Office.

And I have been working on AI ever since. We started more on traditional data science, I would say: tearing down data silos between government institutions, in Germany and, of course… And ever since ChatGPT and the AI revolution, we have been working mostly on AI tools. And I think the big advantage that we see is that we are in the ministry itself and have very short paths to our colleagues who are working in Berlin and, of course, all around the world. And I believe that in a field as fast moving as AI, that is so important, because it doesn't really work to develop these tools in the traditional IT way of doing things, right?

We used to have IT development projects that take two years, have huge teams, and cost a lot of money, but that are just not fast enough to deliver on an AI solution that our colleagues are experiencing themselves in their private lives, right? And some of them even… some official aspects. So what we think is the big advantage that we have, and what from our experience we would always advertise for, is this fast co-creation from within an organization. And I think that, for a topic like diplomacy, is the best way of leveraging AI. And I'm happy to go into more detail about that.

Gunda Ehmke

Thank you. And I will later ask you more about concrete use cases. But first, I would like to switch to the geopolitical dimension. So, Sharini, switching over to you. Taking a step back: AI is now more or less present everywhere in the political landscape, from the Arctic to here at the Summit. Can you give us a broader picture? How is AI shaping diplomacy, or foreign policy in general? What is the debate, and where are we at the moment?

Shahani Yaktiyami

Thank you. Thank you for the question and also the invitation to be here, which is actually also me being in my home country. So you’ve invited me to my home country, which is an interesting space to be in. But at the broader sort of geopolitical level, AI is shaping not only sort of how we use technology in our strategic communication as countries as well, but as a tool of technology diplomacy. And I don’t necessarily think this is particularly new. Throughout the history of international relations and foreign policy, technology has always shaped our foreign policy. So this is the AI revolution. But if we take it back to the Industrial Revolution, if we take it back to the nuclear revolution, if we take it back to the space race, technology has always informed diplomacy.

And today it is artificial intelligence. So the technology is new, yes, but the tactics aren't. And today we are here at the AI Summit, and this is also India's way of communicating that it is a part of a particular technological revolution from which, in previous histories, because of colonial encounters and other things, we've been excluded. So in this space, this is a way in which countries from our parts of the world are also trying to claim a space in global technology diplomacy. And this is through AI. And what I would also want to qualify is that what we're seeing in this particular AI race is narratives of competition.

So if you look at policy documents coming out of the United States, coming out of China, there's a clear connection drawn to winning an AI race or securing leadership in artificial intelligence. And if you are a country of that size, and you are the country that has invented the frontier technology and been the first mover in it, that gives you a kind of geopolitical leverage which countries like Germany and India perhaps don't have, because we aren't at that frontier capability. But that being said, we're not powerless; we just have a different form of power, of expressing power. And that is when the entire middle power conversation comes into play. Both India and Germany can see themselves as, in fact arguably, middle powers, and they have different ways of using their specific leverage on the AI value chain as geopolitical leverage. For Germany, historically, this has been through rules, regulation, and regulatory power. For India, now, it is making a case for applications. And we've seen that the summit has changed: from the AI Action Summit under the French presidency, India is now framing it as the Impact Summit. The slogans of the summit are very much to do with aspirations to deployment, or aspirations to impact.

So that is really a way in which a middle power like India is also trying to kind of claim its position on the stack. So what you’re seeing are the great powers who are competing at the frontier level, and then there are middle powers who are claiming their specific power on the value chain in different ways. And I’ll stop there for a second.

Gunda Ehmke

Thank you very much. And I would like to pick up this statement of yours: the tech is new, but the tactics aren't. So I have a diplomat sitting here next to me. Would you agree with the statement? And how do you govern AI in the Ministry of Foreign Affairs? Would you say this is still the right approach to AI?

Norman Schulz

Oh, well, the short answer would be no. But the topic is so broad that obviously I could give you a four-hour talk about it. But as a diplomat, as you said, one has to start by saying that the AI Impact Summit here in Delhi, where we are all gathered, showcases the broad variety of AI, and the broad picture that AI is now part of everyday life, of all strands of life. It is a tool in communication. It is a tool in agriculture, in industrial entrepreneurship, in finance, and also in diplomacy and foreign policy. So I find it very interesting what you alluded to, that we have these revolutions all the time, like the Industrial Revolution, like the nuclear revolution after the Second World War.

And where do the foreign ministries, where does foreign policy come in? I mean, the technological revolution created frontrunners like the UK, maybe a little bit like France. But there was a point in time when people saw that only being at the front and adopting the frontier models is not the way to success; we have to find a way to regulate things, because otherwise people will lose their lives. It's not safe to work with. It's polluting the environment. Even back then, there was a problem. Nuclear power, the same thing. There was a race in the '50s, and the Cuban Missile Crisis at the beginning of the '60s showed the world that the nuclear race could not go on as it was.

But we need international cooperation to somehow mitigate the risks of it. And I think AI is at a similar point. Maybe it needs a couple more years before the U.S. and China will actively come together and work out what limitations and regulations we have to put on the technology, because the risks in the end are outweighing possible and potential benefits. And the other great question is: where do the middle powers come in? And this is what India and Germany are talking about. Well, we had the speech of Mark Carney, the Canadian Prime Minister, in Davos, where he actively called for middle power cooperation.

I think India is in a wonderful place, because you are a digital powerhouse and you have all the structures and all the workforce to also become an AI powerhouse. I would also make the case that Germany has some advantages: we have infrastructure, we have the money to invest into AI, and we also have industrial data to be a frontrunner. Even if we didn't succeed at the stage of large language models, maybe when it comes to robotics and embodied AI, Germany will still have a role to play. And obviously we at the Foreign Office are there to accompany this development and to prepare the ground for international cooperation. And I'll leave it at that, because others…

Gunda Ehmke

Thank you. Thank you. I would like to turn now to the Pranav Institute's perspective. The Pranav Institute works at the intersection of emerging technology, public policy, and society from an India-first perspective. How do you see potential room for cooperation between India and Germany? We hear now about middle powers, that these are middle powers, and I hear a lot at the summit that India is leading in AI adoption. I wouldn't say so for Germany; maybe my German colleagues would agree or disagree with me. But from your perspective, where do you see potential cooperation? Could you also take a step back and explain to the audience where you see India at the moment, maybe also in light of the AI summit?

Shyam Krishnakumar

Yeah, can you hear me? I think that's a very challenging question to answer. Where is India at? India is at a very interesting place, certainly. India is not lagging behind. India is not yet at a place where we can build frontier models; I think the infrastructure capacity required for that is very high. I do see some interesting innovation coming out of India. We saw those 14 models that were released over 14 days, and it is very, very interesting in the sense that this is innovation which is grounded and contextual. It is coming from the grassroots. You are able to find native language use cases. You are able to do inference at scale at an order of magnitude cheaper cost.

So you are seeing technical innovation which is more context-appropriate coming from India. There is, of course, a large workforce which is talented in technology, and there certainly exists a possibility of upskilling it into AI, and that is a very large pipeline. So I think India is in a very interesting place. India is adopting, India is innovating, India is building applications and use cases, which is a very useful way to think about the technology in its early stages, right? Because there is a huge possibility of investment booms and busts that can come in when you go in a technologically challenging direction without being adaptive. So I think the focus on asking what we can solve is a very useful way to think.

I think the counselor did allude to industrial AI. That's a fantastic use case where Indo-German cooperation would certainly work out, because there is industrial expertise in Germany, there is automation expertise, there is industrial data, and India has the capability to build technology, build models. So I think we should identify that, and not worry about the race for frontier models, because transformers are not going to be the only technology paradigm out there, and not play the game that the leading powers are playing, but really think as middle powers do, as Sharini said, and ask: can we focus on sectoral expertise?

For example, AI in healthcare is a fantastic opportunity for Indo-German cooperation; there is fantastic data available. India performs ten times the number of surgeries that other countries do, so there's very interesting data available. Germany has the capacity to invest. Can we cooperate? Germany has expertise in automation. India has, you know, people who can build AI models. Can we cooperate? So I think there is a possibility for bilateral cooperation that gives a result that is more than one plus one in some of these cases. And I don't think it's a zero-sum game in which the U.S. is winning or China is winning and all the rest are left behind. I think the focus on applications is really where a differentiator is possible, and that need not come at frontier-level costs.

Gunda Ehmke

Thank you. And I would like to focus now on this application side, because this is maybe the way to react to big tech, or to being a country in the middle between these mentioned countries. Raphael, can I hand over to you to share a little bit about the Foreign Office's approach? I know that you are working on a negotiation tool. And to what extent can open source be, or might it be, a solution to the situation we are in at the moment?

Raphael Leuner

Sure. Yeah, so I think it's exactly as you said: the focus is on applications. We made a consequential, but I think important, decision at the beginning that when we are implementing AI, we are focusing, for most of what we do, on open source technologies: not just the models themselves, but also a lot of the scaffolding and applications around them. So on the one hand, for example, we are reusing applications that come from one of our state governments, who have built a general chat and knowledge-based application that we are reusing. But of course, we have specific applications in the Foreign Office, like supporting negotiations. A lot of what diplomats nowadays do is not necessarily sitting in rooms and negotiating face-to-face, but actually digging through huge piles of documents and…

trying to understand the positions of other countries, and the impact that NGOs, academia, and corporations bring into huge negotiation processes. And that, as we probably all know, is a great chance for artificial intelligence to be leveraged. I think one important point, when we're talking about AI and open source AI in governments, is that we have seen a big shift in the trend last year: a lot of the leading open source AI models, and actually also the ones that have been adopted in many parts of the world, are coming from China nowadays. I think that's an interesting intersection for my position as a technical observer here, where we are looking at the numbers and seeing that really the world is adopting Chinese AI models at the moment.

And, of course, the consequences that that might bring for a country like Germany, for Europe, or for India on a global scale, if maybe… some of our partners are implementing Chinese AI models. So when it comes to open source, I think it's really important that countries like India (and I think India is in a great position, and I'm super excited to see these new Indian AI models, these Indian LLMs) push to offer alternatives to these Chinese models.

Gunda Ehmke

Thank you. I would like to come back to this impact aspect. We heard about impact in the public sector, but maybe also reflecting on the summit, the AI Impact Summit: what are your thoughts on how we will now continue the conversation regarding impact, on really being concrete and not only writing governance formats or governance frameworks? How can we make this cooperation very concrete, continue from where we are, and face this geopolitical challenge?

Shahani Yaktiyami

Yeah, I like that all the geopolitical questions then somehow come back to me. But I don't blame you, because my background is in international relations, so that serves this purpose very well. But I want to connect your point to what you just said about open source and the China connection. I think we're reaching a stage in international relations in which geopolitics and technology can't be separated. When we are integrating artificial intelligence into our daily life and into our government systems, we can't really separate out the security risks that come with it. And I think every country has a unique security situation. For Germany, obviously, there is the concern with Ukraine. In India, we have border security challenges as well.

We have territorial disputes that are very significant and have very serious national security implications. So the kind of technology we deploy into our systems, whether it's open source Chinese models or any other form in which we, or any country, would perceive a national security risk, needs to be factored in. And that is why even our technology decisions have to factor in geopolitical risk, which back in the day was not something that, say, companies would have to do. But now every single company that I see has a position for a geopolitical risk advisor. And that really comes from the fact that we are living in a world in which, if we are using technologies so seriously in our lives, we do need to factor in how those technologies can be weaponized in a particular geopolitical situation.

And then that brings me back to one of the points from earlier: you said that the Foreign Office would find it helpful for reports to be processed by AI. As a think tank, I think I'm a little bit hurt, I have to say, because a lot of our work is producing those reports, but we will force you to read them. We're very persistent at the German Marshall Fund. We will reach out and invite you and make you read them. But jokes aside, we're aware that our ability to consume information is getting shorter while the world is getting more complex.

And therefore, we are also preparing, even in the think tanking that we do, even in the way in which we do our daily jobs, to factor that in. There will be an AI in the system, and we need to take that into consideration as well.

Gunda Ehmke

And since there will be an AI in the system, we have to make sure that we can trust this AI, that it's inclusive, and that it's ethical, or trustworthy with regard to standards. So how do you in the government react to this? Could you also share more about the Global Digital Compact, and what this panel, this scientific panel I think it's called, is about? And how do we make sure that this governance reaches the AI system? How do we make sure that the systems are aligned with our values?

Norman Schulz

well that’s big question the best way to align the systems with our values is to develop to develop them ourselves right and not just procure them from from outside and I couldn’t agree more with the point that you made about the Chinese models, that even if it is open source, even if it runs on our servers, there are still Chinese models. They still have the Chinese ways and the Chinese ways of thinking, which comes through maybe not all the time. So using AI to do diplomatic work will not be the way because then every report will be the same, right? So I hope that Germany will not go the way to write the diplomatic reports now only using AI or summarizing it.

But we need our diplomats to insert that innovative thinking. And innovative thinking does not come from AI, because AI is much rather replicating and summarizing, in my understanding. The new ideas still come from the human side, as far as I can make out. The Global Digital Compact: thanks for the question. The Foreign Office took the lead in Germany in negotiating the Global Digital Compact. And obviously you can make the point that this is a UN compact and the UN system is under immense pressure at the moment. So what does it achieve? I would make the point that, despite all that, it has at least produced two valuable avenues for future cooperation and discussion, two platforms.

The first is the AI panel. I think it's called the Independent International Scientific Panel on AI, but I could be wrong with the two I's; it's rather complicated. It was just yesterday that the UN Secretary-General made the point that the AI panel, and the second one, the dialogue I will come to in a second, are the two major things through which the UN is coming into the picture. And the panel has the task of putting the discussions that we have at a global level about AI on a scientific basis. So those are experts, and I'm happy that there are two experts from Germany on the panel. Only the U.S. and China also have two experts.

I’m terribly sorry. I don’t know how many Indian experts are on the panel. But we’ll find. We’ll find that out. True. So they will produce a first report, a summary of where the AI science is now standing in time for the first global dialogue on AI governance, which will happen in July in Geneva in the margins or back to back with the AI for Good Summit at the International Telecommunications Union. And this dialogue serves the other big purpose of the Global Digital Compact, which is to make the AI discussion inclusive. And so it’s also the UN Secretary General nonetheless that said that AI cannot be a discussion among the few, the ones that are the front runners like the US and China.

They should not be the only ones to set the rules; it has to be a truly inclusive discussion about AI. Up until now, more than 100 countries were not part of this discussion, because they were not members of the European Union, not members of the Council of Europe, not members of the G7 or the G20. But they are the ones that will use AI, that will adopt AI, and they will also feel the bad results if AI is not doing what it is supposed to do. So it's good that they have a voice at the table, that all UN member states will come together in July and talk about AI on the scientific basis that the panel has provided.

So that is something that the Global Digital Compact is doing. And, of course, we can talk about geopolitics all the time, but I think that's a way forward. And I'll stop here.

Gunda Ehmke

Thank you. Thank you. And, Shyam, let me turn to you now.

Shyam Krishnakumar

And there is the fact that this is not a zero-sum game in a lot of this. I think the idea that we can work together to bring in a larger voice, beyond the worries of the two or three countries which are able to compete at the top, is something that we share. And I think the role of middle powers in bringing a more inclusive conversation is really important, and Indo-German cooperation is an opportunity for that: including, for example, the industrial AI that the counselor mentioned, or other opportunities where we can practically create tools, like what Raphael is also talking about. Tools that are beneficial, and maybe open source.

Why should open source models only come from strategically challenging sources? There could be Indo-German open source models, smaller models, not frontier models, that could be beneficial.

Raphael Leuner

Yeah, I can maybe react to that directly, because I think it's super critical. Some people believe, or would make us believe, that the AI race is already over, or is only being decided between the US and China. I don't believe that. I think we're more at the start of what's going to come, and I think we can feel this at the summit. And Gunda, you asked the question: what comes next? I think next comes building and implementing AI in all these fields that we have. I think we see so many ideas around here, and first steps towards that, but we don't really see widespread AI adoption in every field, in every part of life.

I do think this is going to happen over the next five years. And I don't believe for a second that this is only going to be done by the U.S. or China. And when it comes to middle powers, Germany and India, I think we are going to see much closer collaboration in smaller groups that don't try to build dependence (making you dependent on us, making us dependent on you), but rather ensure that every country can bring to the table what it is particularly good at, and make the results improve the application of AI for everybody involved. I do think there is a strategy for that.

And I think, yeah, the way forward you have asked about is to start with it and to build AI together. I think this is a great, you know, a great rally cry for…

Gunda Ehmke

Yeah, yeah, please.

Shyam Krishnakumar

Raphael, you led me onto a very interesting trail, so I had to intervene. I think one of the interesting moments in technology, again, was the open source revolution of the 1990s, right? You really saw operating systems, considered the frontier technology of that time, being built by volunteers at a fraction of the cost. It diffused the race in a certain way, or diffused the dominance in a certain way, but it also enabled accessibility across the world. So even coming together as middle powers, the power of open source in democratizing and reducing the factor costs of access to AI becomes very powerful if you draw from it. And now you've led me onto a trail as well.

I just want to kind of contextualize the sovereignty thing as well.

Shahani Yaktiyami

And I do think that when we talk about artificial intelligence, it's not just the one application that we see when we use our phones or interact with a particular model, right? It's an entire stack. And the question of sovereignty, or the concerns vis-a-vis the sovereignty debate, is also born out of geopolitics, right? We don't want to be a country in which suddenly one day we wake up and our technology is not available to us because of something that happened in another corner of the world. So the sovereignty debate is coming out of geopolitics as well. That being said, we don't need to be beholden to it. I fully agree.

And I really like the point on us understanding what our strengths are. I mean, Germany had a high-tech strategy that came out last year. There's also an emphasis on Germany being a space for data, a data hub. And India is trying to do that as well; Germany already has that. One of the things China is really good at, actually, is industrial data, because they have been collecting this data for a very long time, because they automated quicker than a lot of us. And that's something where we can collectively build competitiveness. So I do think we need to reset some of the inequalities in the AI stack, and that sovereignty, as much as I understand where it comes from, I don't always think that that's the…

best language to talk about where we are at. I do think we need more sophisticated and nuanced ways of talking about a managed interdependence, where I hold a certain value on the AI stack, that is my strength, and the likelihood of you weaponizing it is very limited. That's why I have leverage. And I do think leveraging a country's strength on a specific part of the AI stack is a prominent and powerful middle power strategy.

Gunda Ehmke

Thank you. These are all beautiful closing remarks, but I would like to open the floor to the audience. Are there any questions? Yes, everyone around.

Audience

Hi, I’m Sreeni. I’m a student at Ashoka University. I have a question for everyone in the panel. Feel free to. answer. The question is, what are some parts of foreign policy research, decision making and implementation which can be automated by AI or that will use a significant uses of AI to sort of do your day -to -day tasks?

Norman Schulz

Maybe I can quickly answer this question. Well, I certainly don't think that AI will make any decisions anytime soon. There's always going to be a human making the decision. And it's not going to be me, it's not going to be my boss; it's going to be a collaborative decision by the government and the legislature and all of that. But our job will also not go away. We will use AI to make our job easier: to consume data, or I would like to say information, to consume information more easily and quickly, which in turn will free diplomats for the other things, which is connecting the dots, which is thinking out innovative ways of cooperation. It's basically like drinking coffee and shaking hands.

It's traveling to India and learning a lot about the situation here. So AI will free us from the tedious task of skimming through these very valuable documents, written not only by NGOs but also by governments, and will make our lives easier, but our work will not go away. Thank you.

Gunda Ehmke

Okay, one more question. The lady maybe in the back.

Audience

Hi, I’m Sanjeevni, and I work in radio journalism in the UK. And my question was for you, Norman. So, specifically, So, Norman and Sharini, both of you, actually. So, you guys are doing your Masters in Journalism. I was studying about how journalists were framing the Russia -Ukraine war. And we were observing how the narratives were changing based on different outlets. But something when we were talking about how AI is coming into play, do you think AI will help change narratives for the better? And I’m not speaking from journalism point of view. In general, geopolitically, do you think the narratives will be framed in a way that’s unbiased? Or how do you think it will help in that?

Shahani Yaktiyami

I don’t take a stab. I’d be very curious to point a question, actually, to you, because one of the things I’ve been really intrigued in in my line of work is how AI has been being deployed in the media and newsrooms. And I’d be very interested to have a chat after to learn how you’re doing that in terms of methodology. But to your question on AI shaping, I think it’s a great question. narratives, I would not let AI shape narratives. I would hope we shape narratives as human beings, depending on sort of what we think and feel and analyze through empirical evidence about the world. And I would be very worried in a world in which AI, we allow AI the space to shape narratives, especially on geopolitics, because then that would depend upon what that particular AI model that is doing the narrative shaping has been trained on.

But that being said, I also see how AI can do harm in terms of amplifying incorrect narratives or geopolitically challenging narratives. And that's where we know that AI cannot replace society. So I do think that a world in which we allow AI to shape narratives is not a world we want to live in; but at the same time, if it is a world in which that can happen, we need to find the right mitigation strategies. One thing that I know India is doing (Shyam, we talked about it earlier) is these bias detection technologies, which are critical. AI is a technology for which, on the one hand, we do need strong regulation to make sure that we can prevent the harms it can do, but at the same time we also need technological tools to deal with some of those harms, and to push democratic innovation in AI for exactly these harms.

And I’ll stop there.

Norman Schulz

Just one sentence. If you let AI write the newspapers, they become incredibly dull, because it is going to be repetitive all the time. But I agree with your point about bias. This is something we all have to challenge and to face, and AI is helping us there, which is a good thing. AI is not only a risk; it is also an opportunity, helping us detect bias and then counter it. Thank you.

Raphael Leuner

Just one sentence I want to add, because I found your point so important. I don’t see any risk that AI is going to shape narratives by itself, but of course it is an incredible tool for actors trying to shape narratives. And we have seen this on so many fronts already. We have colleagues in the foreign office who are monitoring this and seeing that it is, for example, used to amplify certain messages across social media, and increasingly now across fake websites that, with the help of AI, you can pull up in seconds, so suddenly you don’t have one or two of them, but thousands. AI is already used quite heavily in this way as a tool by certain actors trying to influence geopolitical discussions.

Gunda Ehmke

Sorry, we are running out of time, but I’m sure the speakers will stay here a little longer. Thank you for listening to our panel discussion. A big round of applause, please. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (31)
Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The AI Impact Summit panel was moderated by Gunda Ehmke and included Raphael Leuner from the German Federal Foreign Office and Dr. Shahani Yaktiyami of the German Marshall Fund.”

The knowledge base lists Gunda Ehmke as the moderator and confirms the participation of Raphael Leuner and Dr. Shahani Yaktiyami in the panel discussion [S1] and [S2].

Confirmed (medium)

“Leuner highlighted that many leading open‑source large‑language models now originate from China, raising security concerns for Europe.”

The source notes that China’s AI industry is embracing open-source LLMs such as DeepSeek’s R1 model, confirming that leading open-source models are indeed emerging from China [S106].

Additional Context (medium)

“Schulz cited recent U.S.–China talks on AI risk management as evidence that the United States and China will eventually cooperate on limits and safeguards.”

The knowledge base mentions ongoing U.S.-China discussions on AI risk management, providing context that such talks are occurring, though it does not specify the Davos speech by Mark Carney [S16].

External Sources (114)
S1
AI Algorithms and the Future of Global Diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S2
AI Algorithms and the Future of Global Diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S3
AI Algorithms and the Future of Global Diplomacy — -Gunda Ehmke: Moderator/Host of the discussion
S4
AI Algorithms and the Future of Global Diplomacy — Ehmke emphasizes the importance of ensuring AI systems are trustworthy, inclusive, and ethical, questioning how governan…
S5
Multistakeholder Partnerships for Thriving AI Ecosystems — From the perspective of the UN Development Program, certainly we see a concern with what is happening in the development…
S6
AI Algorithms and the Future of Global Diplomacy — – Shahani Yaktiyami- Norman Schulz
S7
AI Algorithms and the Future of Global Diplomacy — Speakers:Norman Schulz, Raphael Leuner Speakers:Norman Schulz, Shahani Yaktiyami Speakers:Raphael Leuner, Norman Schul…
S8
AI Algorithms and the Future of Global Diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S9
https://app.faicon.ai/ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S10
AI Algorithms and the Future of Global Diplomacy — – Shahani Yaktiyami- Norman Schulz – Shahani Yaktiyami- Raphael Leuner Yaktiyami advocates for sophisticated managed i…
S11
AI Algorithms and the Future of Global Diplomacy — -Shyam Krishnakumar: Works at an institute (appears to be associated with Pranav Institute based on context), focuses on…
S12
AI Algorithms and the Future of Global Diplomacy — Speakers:Raphael Leuner, Shahani Yaktiyami, Shyam Krishnakumar Speakers:Raphael Leuner, Shyam Krishnakumar Speakers:Ra…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S17
Why will AI enhance, not replace, human diplomacy? — AI tools are already here to assist certain aspects of negotiations, from language translation to data analysis. However…
S18
How AI Is Transforming Diplomacy and Conflict Management — Thank you, Michael. And thank you all for being here this morning. A big welcome. Over the next 10 minutes or so, Charli…
S19
Increasing routing security globally through cooperation | IGF 2023 WS #339 — Relying on open standards can decrease dependency on specific vendors.
S20
Host Country Open Stage — Nordhaug argues that digital public goods provide governments and organizations with greater control and sovereignty com…
S21
Generative AI: Steam Engine of the Fourth Industrial Revolution? — The adoption of newer technologies is not limited to a specific industry and is prevalent across all sectors. Currently,…
S22
Global Perspectives on Openness and Trust in AI — Bouverot proposes that middle economies like Canada, France, Germany, Switzerland, India, Japan, and Australia can colla…
S23
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S24
Microsoft reveals Chinese groups use AI content to undermine US elections — Microsoft Corp. has identified Chinese groups using social media and AI-generated images to incite controversy and gain …
S25
Report reveals rising use of AI in manipulative online campaigns — Google-owned US cybersecurity firm Mandianthas reported an increasing trend of AI being used for manipulative informatio…
S26
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — Approach to quantum governance – dialogue versus action Development | Legal and regulatory Multi-stakeholder and Inter…
S27
WS #64 Designing Digital Future for Cyber Peace & Global Prosperity — The speaker emphasizes the need for a governance framework that caters to the lowest common denominator. They stress the…
S28
From principles to practice: Governing advanced AI in action — These key comments fundamentally shaped the discussion by establishing both the theoretical framework and practical urge…
S29
Negotiations — Artificial Intelligence (AI) has various applications in diplomacy. It can be used for data analysis to predict the outco…
S30
The role of diplomacy in AI geopolitics | AGDA — He also advised diplomatic services to start AI transformation through small projects such as the automation of administ…
S31
Secure Finance Risk-Based AI Policy for the Banking Sector — Impact:This shifted the conversation from abstract governance principles to concrete regulatory mechanisms. It provided …
S32
Closing remarks – Charting the path forward — A central theme was the need to move beyond abstract principles toward concrete implementation tools, technical standard…
S33
How can you check if AI will endanger your job? — AI will profoundly impact the text-intensive nature of diplomacy. Diplomats have long been masters of language, as demon…
S34
Networking Session #26 Transforming Diplomacy for a Shared Tomorrow — This networking session, hosted by the Data Innovation Lab of the German Federal Foreign Office, explored the role of ar…
S35
Embracing AI in diplomacy: How can Europe prepare for pivotal transformation in global affairs? — Firstly, AI is reshaping the geopolitical environment in which diplomacy operates. It facilitates the redistribution of …
S36
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S37
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S38
Discussion Report: Sovereign AI in Defence and National Security — International Cooperation vs National Autonomy International cooperation is essential while maintaining national sovere…
S39
IndoGerman AI Collaboration Driving Economic Development and Soc — Summary:Speakers consistently highlighted how Germany’s precision engineering and regulatory expertise combines effectiv…
S40
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S41
[Event summary] The impact of AI on diplomacy and international relations — Panel 2: AI as a cognitive tool for diplomatic practice:Andrew Tony Camilleri, Technical Attaché, Permanent Representati…
S42
Will AI take over diplomatic reporting? – WebDebate #56 summary — Third, in analytical reporting, AI may serve for background research and collecting data. But this area of diplomatic wo…
S43
Will AI take over diplomatic reporting? (WebDebate #56) — “It was kind of agreed that that would be a good use since they tend to be, they tend to follow a certain patent pattern…
S44
Discussion Report: AI Implementation and Global Accessibility — Awesome question, really. And I think, and it goes back to the point that I raised earlier, which is that the benefit of…
S45
Generative AI: Steam Engine of the Fourth Industrial Revolution? — The adoption of newer technologies is not limited to a specific industry and is prevalent across all sectors. Currently,…
S46
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — And that signal that the benefit, they want to see it to trickle down into the economy, into the companies, so it’s more…
S47
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S48
From principles to practice: Governing advanced AI in action — The speakers show broad agreement on fundamental goals (safety, trust, international cooperation) but significant disagr…
S49
Why science metters in global AI governance — Low to moderate disagreement level with high consensus on core principles but divergent views on implementation strategi…
S50
AI Algorithms and the Future of Global Diplomacy — High level of consensus with significant implications for AI policy coordination between India and Germany. The alignmen…
S51
WS #205 Contextualising Fairness: AI Governance in Asia — 2. Data cleaning vs. expansion: While some advocated for “cleaning” biased data, Mueller emphasised the importance of ex…
S52
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S53
Mediation and artificial intelligence: Notes on the future of international conflict resolution — AI tools to support the work of mediators – by providing better knowledge management, a better understanding of the conf…
S54
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S55
Is the AI bubble about to burst? Five causes and five scenarios — It is in this context that low-cost, open-weight models, especially from China, are reshaping the competitive landscape an…
S56
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Challenges arise with the use of open-source components in coding. While open-source coding is prevalent, with 80% of co…
S57
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — See, I think the core aspects of regulation, as sir said, I generally don’t go into technology or which technology to us…
S58
IndoGerman AI Collaboration Driving Economic Development and Soc — AI systems. So at the end of the day, the aim is to translate the idea of trustworthy AI into testable criteria and prac…
S59
AI Algorithms and the Future of Global Diplomacy — Thank you. Thank you. And, Jian, let me turn to you now. And the fact that there is not a zero -sum game in a lot of thi…
S60
IndoGerman AI Collaboration Driving Economic Development and Soc — The discussion revealed significant evolution in Indo-German collaboration models, moving from traditional client-servic…
S61
Can cities tame big tech? — So what does a tech ambassador actually do? The role is a mix of a diplomat, aventure capitalist, and a policy expert. O…
S62
How AI Is Transforming Diplomacy and Conflict Management — Absolutely. And thank you so much for having me. And good morning, everyone. So my career, as you mentioned, has sort of…
S63
DC-IoT Progressing Global Good Practice for the Internet of Things | IGF 2023 — Maarten Botterman:Yes, thank you for that, Wout. What we see is the rapid developments make it more and more difficult a…
S64
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S65
Negotiations — Artificial Intelligence (AI) has various applications in diplomacy. It can be used for data analysis to predict the outco…
S66
Diplomatic policy analysis — Policy analysis is an essential aspect ofmodern diplomacy, providing the foundational insights that enable states to nav…
S67
How to make AI governance fit for purpose? — All speakers agree that AI governance requires inclusive participation from multiple stakeholders including governments,…
S68
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S69
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Despite representing different sectors and business models, the panellists demonstrated remarkable consensus on several …
S70
AI Governance Dialogue: Steering the future of AI — This comment addresses a fundamental flaw in top-down governance approaches, highlighting that trust cannot be imposed e…
S71
AI Algorithms and the Future of Global Diplomacy — Raphael Leuner offered unique insights into how AI is being practically implemented within the German government. He exp…
S72
AI Algorithms and the Future of Global Diplomacy — Institute. Then we have Raphael Leuner, Data Scientist at the German Federal Foreign Office. We have Dr. Shahani Yaktiya…
S73
Networking Session #26 Transforming Diplomacy for a Shared Tomorrow — This networking session, hosted by the Data Innovation Lab of the German Federal Foreign Office, explored the role of ar…
S74
AI diplomacy — For centuries, power was defined by territory, armies, and economic might. Today, a new element is paramount: data and t…
S75
Embracing AI in diplomacy: How can Europe prepare for pivotal transformation in global affairs? — Firstly, AI is reshaping the geopolitical environment in which diplomacy operates. It facilitates the redistribution of …
S76
How AI Is Transforming Diplomacy and Conflict Management — I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our …
S77
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S78
Discussion Report: Sovereign AI in Defence and National Security — International Cooperation vs National Autonomy International cooperation is essential while maintaining national sovere…
S79
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Need for international cooperation starting with smaller groups of aligned nations before scaling to larger agreements
S80
What is it about AI that we need to regulate? — Addressing the Tension Between Digital Sovereignty and Global Internet InteroperabilityThe tension between digital sover…
S81
IndoGerman AI Collaboration Driving Economic Development and Soc — Summary:Speakers consistently highlighted how Germany’s precision engineering and regulatory expertise combines effectiv…
S82
GermanAsian AI Partnerships Driving Talent Innovation the Future — And very important, if we in Germany get used to the speed we have in India, then this is going to be unbeatable project…
S83
[Event summary] The impact of AI on diplomacy and international relations — Panel 2: AI as a cognitive tool for diplomatic practice:Andrew Tony Camilleri, Technical Attaché, Permanent Representati…
S84
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S85
Will AI take over diplomatic reporting? – WebDebate #56 summary — Third, in analytical reporting, AI may serve for background research and collecting data. But this area of diplomatic wo…
S86
Will AI take over diplomatic reporting? (WebDebate #56) — Overall, all speakers agreed that AI should be used with caution in diplomacy and with human oversight, as it is difficu…
S87
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S88
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S89
Sticking with Start-ups / DAVOS 2025 — The overall tone was informative and optimistic. Panelists spoke candidly about challenges in the startup world but main…
S90
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S91
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S92
US-EU-China Triangle — The overall tone was analytical and somewhat cautious, with panelists offering differing perspectives on the likelihood …
S93
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S94
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S95
WS #259 Multistakeholder Cooperation Ineraof Increased Protectionism — ### Geopolitical Shifts and Erosion of Trust Anne Marie Ingtof Milgar: The question on what are the political trends af…
S96
Scaling AI for Billions_ Building Digital Public Infrastructure — The discussion maintained a balanced but cautionary tone throughout. While panelists acknowledged the tremendous opportu…
S97
Day 0 Event #161 Preparing Your Internet to Power the Digital of Tomorrow — The discussion maintained a consistently professional and collaborative tone throughout. Speakers demonstrated expertise…
S98
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S99
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S100
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S101
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S102
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — Yeah, thanks so much. Yeah, maybe to get to take a step back and answer the question, how like someone like me as a as a…
S103
Big data: The next accelerator for diplomacy? — At the basis of all these developments is data. The generation of data can turn into incredibly valuable insights for bus…
S104
WikiLeaks – Don’t waste the crisis: towards Diplomacy 2.0 — So why is it different this time? The Internet has revolutionised information and communication – two pillars of diplomac…
S105
Acknowledgements — In addition, MFAs have become increasingly interested in AI tools to monitor open data for early crisis detection – with…
S106
China’s AI industry is transforming with open-source models, challenging the OpenAI proprietary approach — China’s AI landscape is witnessing a profound transformation as it embraces open-source large language models (LLMs), larg…
S107
The rise of large language models and the question of ownership — What are large language models? Large language models (LLMs) are advanced AI systems that can understand and generate va…
S108
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In the discussion, several concerns were raised regarding web data, large language models, chat-based search engines, an…
S109
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Bilel Jamoussi:Since you mentioned Meta, I’ll go to Melinda and ask you about Meta has made significant contributions to…
S110
Military AI: Operational dangers and the regulatory void — While international forums are yet to find consensus on key issues, many states are straying further from regulation to …
S111
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — This transcript captures discussions from the AI Impact Summit, a collaborative event between France and India focused o…
S112
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — We must share guidelines to orient and guide the development of artificial intelligence in full…
S113
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Estelle David from Business France opened by showcasing the strong French AI delegation of about 100 companies across se…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
R
Raphael Leuner
6 arguments · 157 words per minute · 1172 words · 446 seconds
Argument 1
Fast co‑creation through internal data labs enables rapid AI deployment, avoiding slow traditional IT cycles (Raphael Leuner)
EXPLANATION
Raphael explains that the German Foreign Office’s internal data labs allow close collaboration with diplomatic staff, shortening development cycles. This fast co‑creation contrasts with traditional IT projects that can take years and are too slow for the rapid pace of AI adoption.
EVIDENCE
He notes that the government launched data labs across ministries in 2021, creating 16 labs by 2022, and that he works in the foreign office lab [12-14]. He highlights the advantage of short contacts within the ministry and the need for speed, contrasting it with two-year IT projects that are costly and too slow for AI solutions [18-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The German government launched 16 internal data labs in 2021 to accelerate AI development, bypassing traditional multi-year IT projects [S2].
MAJOR DISCUSSION POINT
Accelerated AI development via internal labs
DISAGREED WITH
Norman Schulz, Gunda Ehmke
Argument 2
AI can support negotiations by processing massive document piles, but human innovation remains essential (Raphael Leuner)
EXPLANATION
Raphael describes AI tools that help diplomats analyse large volumes of documents to understand positions and impacts, thereby supporting negotiations. However, he stresses that the creative and innovative aspects of diplomacy must still come from humans.
EVIDENCE
He mentions specific applications in the foreign office that aid negotiations by digging through huge document piles and extracting insights, while noting that AI is a tool for leverage but human innovation is still required [130-133].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI tools that analyse large document sets for diplomatic negotiations are described in [S16] and [S17], while specific negotiation-support applications are detailed in [S29].
MAJOR DISCUSSION POINT
AI as a negotiation support tool
AGREED WITH
Norman Schulz, Shahani Yaktiyami
Argument 3
Prioritising open‑source technologies reduces dependence on external vendors and opens space for Indo‑German model development (Raphael Leuner)
EXPLANATION
Raphael states that the foreign office deliberately chooses open‑source AI solutions, reusing existing applications and building on community models. This strategy limits reliance on foreign proprietary systems and creates opportunities for joint Indo‑German open‑source projects.
EVIDENCE
He explains that the office focuses on open-source technologies for both models and scaffolding, reusing state-government chat applications and developing negotiation tools [127-130]. He also points out the growing adoption of Chinese open-source models and the need for alternatives, highlighting interest in Indian LLMs as potential options [133-137].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source approaches that lower vendor lock-in and enable reuse are highlighted in [S19] and [S20].
MAJOR DISCUSSION POINT
Open‑source AI strategy
AGREED WITH
Gunda Ehmke, Shyam Krishnakumar, Norman Schulz
DISAGREED WITH
Norman Schulz
Argument 4
AI adoption will expand across multiple sectors within the next five years, not limited to leading powers
EXPLANATION
Raphael predicts that AI will become widespread in many fields over the next five years and that this diffusion will not be monopolised by the United States or China.
EVIDENCE
He notes that while many ideas are emerging, widespread AI adoption is still limited, but expects it to happen over the next five years and stresses that it will not be driven solely by the U.S. or China [208-213].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Widespread AI uptake across sectors over the next five years is noted in [S21]; the view that adoption will not be confined to the US or China is expressed in [S2] and reinforced by middle-power perspectives in [S22].
MAJOR DISCUSSION POINT
Projected timeline for broad AI adoption
Argument 5
Middle powers can collaborate in small, balanced groups to develop AI without creating dependency
EXPLANATION
Raphael argues that Germany and India, as middle powers, should work together in modest coalitions that avoid dependence on any single dominant AI provider.
EVIDENCE
He describes a vision where middle powers cooperate in smaller groups that do not create dependence, emphasizing mutual strengths and joint development [212-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration among middle powers in flexible, issue-based coalitions is discussed in [S2], [S22] and the “coalitions of the willing” concept in [S23].
MAJOR DISCUSSION POINT
Middle‑power collaborative model
Argument 6
AI is already being used by actors to amplify geopolitical narratives through rapid generation of fake websites and social‑media content
EXPLANATION
He warns that AI tools are exploited to mass‑produce misinformation, creating thousands of fake sites and amplifying targeted messages quickly.
EVIDENCE
He cites monitoring of AI-driven actors who use the technology to amplify messages across social media and generate large numbers of fake websites in seconds [294-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Chinese groups using AI to create thousands of fake sites and social-media posts are reported in [S24] and [S25].
MAJOR DISCUSSION POINT
AI‑enabled misinformation risks
G
Gunda Ehmke
3 arguments · 111 words per minute · 927 words · 500 seconds
Argument 1
Concrete, application‑focused cooperation is needed beyond abstract governance frameworks (Gunda Ehmke)
EXPLANATION
Gunda calls for moving from high‑level governance discussions to tangible, use‑case‑driven collaboration between India and Germany. She emphasizes that practical tools and open‑source solutions should be the focus of Indo‑German cooperation.
EVIDENCE
She asks the panel to shift the conversation to concrete cooperation, referencing the need for application-focused work beyond governance formats [122-126].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls to move from abstract governance to concrete pilots are echoed in the IGF workshop notes [S27] and the closing remarks on implementation tools [S32].
MAJOR DISCUSSION POINT
Call for practical Indo‑German AI projects
AGREED WITH
Raphael Leuner, Shahani Yaktiyami, Shyam Krishnakumar
DISAGREED WITH
Norman Schulz, Raphael Leuner
Argument 2
Open‑source AI can serve as a negotiation tool, providing transparent models that diplomats can audit and adapt
EXPLANATION
Gunda proposes that using open‑source AI solutions would give diplomats access to inspectable models, enhancing trust and enabling customized negotiation support.
EVIDENCE
She asks whether open source could be a solution for negotiation tools and references Raphael’s description of reusing open-source applications for negotiations [123-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source models that enable auditability and customization are discussed in [S19] and [S20]; negotiation-specific AI use cases are described in [S29].
MAJOR DISCUSSION POINT
Open‑source AI for diplomatic negotiations
Argument 3
Governance frameworks must move beyond abstract formats to concrete, impact‑oriented pilots that demonstrate AI benefits in public services
EXPLANATION
She calls for shifting from high‑level governance documents to tangible projects that show real‑world impact, ensuring AI implementation is measurable and effective.
EVIDENCE
She emphasizes the need to focus on concrete cooperation and application-driven work rather than merely writing governance formats [122-126].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for impact-oriented pilots rather than only policy documents is highlighted in [S27] and [S32].
MAJOR DISCUSSION POINT
From governance to concrete AI impact
N
Norman Schulz
9 arguments · 127 words per minute · 1437 words · 675 seconds
Argument 1
AI will free diplomats from tedious data‑consumption tasks; it will not replace human decision‑making (Norman Schulz)
EXPLANATION
Norman argues that AI can automate the processing of large amounts of information, allowing diplomats to concentrate on analysis and relationship‑building. He stresses that final decisions will always remain a human, collaborative process.
EVIDENCE
He states that AI will not make decisions soon, that humans will still decide, and that AI will help consume information faster, freeing diplomats for higher-level work such as connecting the dots and innovating [250-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI automating information ingestion while keeping decision-making human-centric is covered in [S16], [S17] and practical diplomatic examples in [S30] and [S33].
MAJOR DISCUSSION POINT
AI as a productivity enhancer for diplomats
AGREED WITH
Raphael Leuner, Shahani Yaktiyami
Argument 2
International regulation, akin to nuclear arms control, is needed to mitigate AI risks that outweigh benefits (Norman Schulz)
EXPLANATION
Norman draws a parallel between the nuclear arms race and the current AI landscape, arguing that without international cooperation and regulation the risks of AI could surpass its benefits. He calls for coordinated limits and safety measures.
EVIDENCE
He references historical nuclear crises, the Cuban Missile Crisis, and argues that AI is at a similar point, needing years of US-China cooperation to set limits because risks outweigh potential benefits [66-78].
MAJOR DISCUSSION POINT
Need for global AI regulation
AGREED WITH
Raphael Leuner, Shahani Yaktiyami
DISAGREED WITH
Raphael Leuner, Gunda Ehmke
Argument 3
The Global Digital Compact and the UN Independent Scientific Panel on AI provide inclusive, science‑based governance mechanisms (Norman Schulz)
EXPLANATION
Norman explains that the Global Digital Compact, together with the UN’s Independent Scientific International Panel on AI, creates platforms for inclusive, evidence‑based AI governance. These bodies aim to involve a broad range of countries beyond the traditional great powers.
EVIDENCE
He describes the Compact’s role, the AI panel’s composition (two German experts, others), its upcoming report, and the July AI dialogue in Geneva that will feed into the Compact’s discussions [166-184].
MAJOR DISCUSSION POINT
UN mechanisms for inclusive AI governance
DISAGREED WITH
Gunda Ehmke, Raphael Leuner
Argument 4
Reliance on foreign (e.g., Chinese) open‑source models poses geopolitical security risks; developing home‑grown systems is advisable (Norman Schulz)
EXPLANATION
Norman warns that using open‑source AI models from China can embed foreign strategic perspectives, creating security concerns. He advocates for developing indigenous or partner‑based models to maintain control and trust.
EVIDENCE
He notes that many leading open-source models are Chinese, which may carry Chinese ways of thinking, and argues that Germany should avoid relying on them for diplomatic work [157-164].
MAJOR DISCUSSION POINT
Geopolitical risk of foreign AI models
AGREED WITH
Raphael Leuner, Gunda Ehmke, Shyam Krishnakumar
DISAGREED WITH
Raphael Leuner
Argument 5
AI can automate information consumption, allowing diplomats to focus on higher‑level analysis and relationship‑building (Norman Schulz)
EXPLANATION
Norman reiterates that AI will streamline the ingestion of reports, NGOs, and government documents, freeing diplomats to engage in strategic thinking and networking. This automation enhances efficiency without replacing human judgment.
EVIDENCE
He explains that AI will make consuming information easier and quicker, allowing diplomats to spend time connecting the dots and building relationships rather than sifting through documents [250-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Automation of data processing for diplomats and the shift to strategic analysis are described in [S30] and [S33].
MAJOR DISCUSSION POINT
Automation of data processing for diplomats
Argument 6
Decision‑making will remain a collaborative human process; AI serves as a tool, not a substitute (Norman Schulz)
EXPLANATION
Norman emphasizes that AI will not replace human decision‑makers; instead, policy choices will continue to be made collectively by governments and legislatures, with AI providing supportive analysis.
EVIDENCE
He states that AI will not make decisions soon, that decisions will be collaborative among government and legislature, and that AI’s role is to assist rather than decide [251-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centric decision-making with AI as a support tool is emphasized in [S16] and [S17].
MAJOR DISCUSSION POINT
Human‑centric decision‑making
Argument 7
AI‑generated content tends to be repetitive and dull, but AI can assist in detecting and correcting bias (Norman Schulz)
EXPLANATION
Norman observes that AI‑written news can become monotonous, yet AI also offers tools to identify and mitigate bias in content. He sees this dual nature as both a risk and an opportunity.
EVIDENCE
He comments that AI-written newspapers become dull and repetitive, but agrees that AI can help detect bias and counteract it [287-293].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The repetitive nature of AI-written text is noted in [S33]; AI-assisted bias detection is mentioned in [S16].
MAJOR DISCUSSION POINT
Balancing AI content quality and bias detection
DISAGREED WITH
Shahani Yaktiyami, Raphael Leuner
Argument 8
AI functions as a cross‑sectoral tool, impacting communication, agriculture, industry, finance and diplomacy
EXPLANATION
Norman highlights the breadth of AI applications, showing that it is not limited to a single domain but is reshaping many sectors of society.
EVIDENCE
He lists AI as a tool in communication, agriculture, industrial entrepreneurship, finance and diplomacy, illustrating its wide-ranging relevance [61-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sectoral AI applications across communication, agriculture, industry, finance and diplomacy are documented in [S21] and [S30].
MAJOR DISCUSSION POINT
Broad sectoral impact of AI
Argument 9
While AI can automate routine analysis, innovative diplomatic thinking must remain human and cannot be replicated by AI
EXPLANATION
He stresses that AI can support diplomats by handling data‑intensive tasks, but the creative and innovative aspects of diplomacy must stay with people.
EVIDENCE
He argues that AI will not replace innovative thinking and that diplomats need to insert their own ideas, which AI cannot generate [160-164].
MAJOR DISCUSSION POINT
Human creativity versus AI automation
Shahani Yaktiyami
8 arguments · 162 words per minute · 1665 words · 615 seconds
Argument 1
AI is the latest technological revolution shaping diplomacy, continuing a historical pattern (Shahani Yaktiyami)
EXPLANATION
Shahani places AI within a long line of technologies—industrial, nuclear, space—that have transformed foreign policy. She argues that AI is simply the newest tool influencing diplomatic practice.
EVIDENCE
She references how technology has always shaped foreign policy, citing the Industrial Revolution, nuclear era, and space race, and identifies AI as the current transformative technology [35-41].
MAJOR DISCUSSION POINT
Historical continuity of technology in diplomacy
Argument 2
Great powers compete on frontier AI; middle powers like India and Germany leverage regulatory strength (Germany) and application focus (India) to exert influence (Shahani Yaktiyami)
EXPLANATION
Shahani explains that while the US and China vie for AI leadership, middle powers such as India and Germany can influence the AI value chain through regulation (Germany) and application‑driven strategies (India). This enables them to punch above their weight.
EVIDENCE
She points to policy documents from the US and China emphasizing AI race, then describes how Germany uses rules and regulation while India focuses on applications, illustrating middle-power tactics on the AI stack [46-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Middle-power collaboration and the regulatory-application split between Germany and India are discussed in [S2], [S22] and the “coalitions of the willing” concept in [S23].
MAJOR DISCUSSION POINT
Middle‑power AI strategy
AGREED WITH
Raphael Leuner, Shyam Krishnakumar, Gunda Ehmke
Argument 3
AI procurement must factor geopolitical risk and national security considerations (Shahani Yaktiyami)
EXPLANATION
Shahani argues that selecting AI technologies now requires assessing geopolitical and security implications, a factor previously absent from corporate procurement. Nations must evaluate how AI could be weaponized or pose risks.
EVIDENCE
She discusses how security concerns (Ukraine, India’s border disputes) demand that technology choices consider geopolitical risk, noting the emergence of geopolitical risk advisors in companies [141-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Geopolitical risks of using Chinese AI models, including misinformation campaigns, are highlighted in [S24] and [S25].
MAJOR DISCUSSION POINT
Geopolitical risk in AI procurement
Argument 4
Sovereignty concerns stem from geopolitics; a nuanced “managed interdependence” approach is preferable to strict sovereignty (Shahani Yaktiyami)
EXPLANATION
Shahani states that sovereignty debates arise from geopolitical competition but suggests moving beyond a binary sovereignty narrative toward managed interdependence, where countries leverage strengths in the AI stack without becoming overly dependent.
EVIDENCE
She explains that sovereignty concerns are geopolitically driven, cites Germany’s high-tech strategy and data hub ambitions, and proposes a nuanced managed interdependence rather than strict sovereignty [224-242].
MAJOR DISCUSSION POINT
Managed interdependence vs sovereignty
Argument 5
AI should not be allowed to shape geopolitical narratives; human judgment must remain central (Shahani Yaktiyami)
EXPLANATION
Shahani warns against letting AI generate or steer geopolitical narratives, insisting that humans should retain control over narrative formation based on evidence and analysis.
EVIDENCE
She explicitly says she would not let AI shape narratives and that humans must shape them, expressing concern over AI-driven narratives trained on biased data [274-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of human judgment over AI-generated narratives is stressed in [S16] and [S17].
MAJOR DISCUSSION POINT
Human control over narrative formation
AGREED WITH
Raphael Leuner, Norman Schulz
DISAGREED WITH
Norman Schulz, Raphael Leuner
Argument 6
AI can amplify harmful or biased narratives; robust mitigation and bias‑detection tools are required (Shahani Yaktiyami)
EXPLANATION
Shahani notes that AI can be used to spread misinformation or reinforce biased viewpoints, so effective mitigation strategies and bias‑detection technologies are essential.
EVIDENCE
She mentions AI’s potential to amplify incorrect or geopolitically challenging narratives and highlights India’s work on bias-detection technologies as a mitigation measure [281-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven misinformation and the need for bias-mitigation tools are reported in [S24], [S25] and the dull-content bias discussion in [S33].
MAJOR DISCUSSION POINT
Need for bias mitigation in AI‑driven narratives
AGREED WITH
Raphael Leuner, Norman Schulz
Argument 7
AI serves as a tool of technology diplomacy, allowing countries to signal participation in the global tech race
EXPLANATION
She frames AI as part of a broader technology diplomacy strategy, where nations use AI to demonstrate their engagement in the international technological arena.
EVIDENCE
She states that AI is a tool of technology diplomacy and that the AI summit signals a country’s desire to be part of the technological revolution [35-38].
MAJOR DISCUSSION POINT
AI as diplomatic signalling
Argument 8
Strategic communication using AI is a new tactic, even though the underlying technology is not new
EXPLANATION
Shahani notes that while technology has historically shaped foreign policy, the specific use of AI for strategic communication represents a novel diplomatic tactic.
EVIDENCE
She explains that AI is being used for strategic communication and that the tactics surrounding it are new, despite technology itself being longstanding [44-46].
MAJOR DISCUSSION POINT
Emergence of AI‑driven strategic communication
Shyam Krishnakumar
5 arguments · 198 words per minute · 648 words · 195 seconds
Argument 1
India’s contextual model innovation and Germany’s industrial expertise create opportunities for sector‑specific Indo‑German cooperation (Shyam Krishnakumar)
EXPLANATION
Shyam outlines how India’s talent pool and cost‑effective model building, combined with Germany’s industrial data and automation know‑how, open avenues for joint projects in sectors such as industrial AI and healthcare.
EVIDENCE
He describes India’s growing model innovation, large workforce, and contextual use-cases, then details potential Indo-German cooperation in industrial AI, automation, and healthcare, citing India’s surgery data and Germany’s investment capacity [92-110] and [108-119].
MAJOR DISCUSSION POINT
Sector‑specific Indo‑German AI collaboration
AGREED WITH
Raphael Leuner, Shahani Yaktiyami, Gunda Ehmke
Argument 2
Collaborative sectoral projects—industrial AI, healthcare, automation—can combine Germany’s data/automation strengths with India’s talent and cost‑effective model building (Shyam Krishnakumar)
EXPLANATION
He expands on concrete cooperation ideas, suggesting joint work on industrial automation and healthcare AI where each country contributes complementary strengths, creating value beyond the sum of parts.
EVIDENCE
He points to Germany’s industrial data and automation expertise alongside India’s ability to build models and conduct large-scale inference at lower cost, proposing joint projects in industrial AI and healthcare where India provides data and models and Germany provides automation know-how [108-119].
MAJOR DISCUSSION POINT
Joint sector projects for mutual benefit
Argument 3
The open‑source revolution democratises AI, lowering entry costs and enabling middle‑power collaboration (Shyam Krishnakumar)
EXPLANATION
Shyam draws a parallel between the 1990s open‑source software movement and today’s AI, arguing that open‑source reduces costs and barriers, allowing middle powers to collaborate effectively.
EVIDENCE
He references the 1990s open-source revolution, noting how volunteer-built operating systems reduced costs and democratized access, and connects this to current AI open-source efforts that can similarly empower middle powers [219-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source reducing vendor dependence and fostering middle-power partnerships is described in [S19], [S20] and the middle-economy coalition model in [S22].
MAJOR DISCUSSION POINT
Open‑source as a democratizing force
AGREED WITH
Raphael Leuner, Gunda Ehmke, Norman Schulz
Argument 4
India’s large, cost‑effective AI talent pool enables it to build models at lower cost, providing a competitive advantage despite not having frontier models
EXPLANATION
He points out that India’s abundant and affordable AI expertise allows rapid model development and inference, compensating for the lack of frontier‑level models.
EVIDENCE
He mentions India’s sizable workforce, cheaper inference costs, and contextual innovation, highlighting the cost advantage and talent depth [96-105].
MAJOR DISCUSSION POINT
India’s cost‑effective AI capacity
Argument 5
Focusing on application‑driven AI rather than frontier model competition creates a non‑zero‑sum environment for cooperation
EXPLANATION
He argues that by concentrating on sector‑specific applications, countries can cooperate without viewing AI development as a winner‑takes‑all race.
EVIDENCE
He explicitly states that the situation is not a zero-sum game and that focusing on applications offers a collaborative path forward [120-121].
MAJOR DISCUSSION POINT
Application‑first, non‑zero‑sum AI strategy
Audience
1 argument · 148 words per minute · 183 words · 73 seconds
Argument 1
AI can automate substantial parts of foreign‑policy research, decision‑making and implementation, freeing diplomats to focus on higher‑level analysis
EXPLANATION
The audience member asks which aspects of foreign‑policy work can be automated, implying that AI has the potential to handle routine research and decision‑support tasks, thereby allowing diplomats to concentrate on strategic thinking.
EVIDENCE
The audience asks for examples of foreign-policy research, decision-making and implementation that could be automated by AI [246-249].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI automation of diplomatic research and the shift to strategic analysis are discussed in [S30] and [S33]; the human-centric view is reinforced in [S16].
MAJOR DISCUSSION POINT
Potential for AI‑driven automation in foreign policy work
Agreements
Agreement Points
AI should be used to augment diplomatic work (e.g., document analysis, negotiation support) while final decision‑making and narrative shaping remain human responsibilities.
Speakers: Raphael Leuner, Norman Schulz, Shahani Yaktiyami
AI can support negotiations by processing massive document piles, but human innovation remains essential (Raphael Leuner)
AI will free diplomats from tedious data‑consumption tasks; it will not replace human decision‑making (Norman Schulz)
AI should not be allowed to shape geopolitical narratives; human judgment must remain central (Shahani Yaktiyami)
All three speakers agree that AI is a valuable tool for handling large volumes of information and supporting negotiations, but the ultimate analysis, creative thinking and decisions must stay with human diplomats and policymakers [130-133][250-256][274-280].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors the consensus that AI can support but not replace human judgment in diplomacy, as highlighted in discussions on AI-assisted negotiation tools while emphasizing human mastery of the final analysis [S64][S65].
Prioritising open‑source AI solutions reduces dependence on foreign proprietary models and creates space for Indo‑German collaborative development.
Speakers: Raphael Leuner, Gunda Ehmke, Shyam Krishnakumar, Norman Schulz
Prioritising open‑source technologies reduces dependence on external vendors and opens space for Indo‑German model development (Raphael Leuner)
Open‑source AI can serve as a negotiation tool, providing transparent models that diplomats can audit and adapt (Gunda Ehmke)
The open‑source revolution democratises AI, lowering entry costs and enabling middle‑power collaboration (Shyam Krishnakumar)
Reliance on foreign (e.g., Chinese) open‑source models poses geopolitical security risks; developing home‑grown systems is advisable (Norman Schulz)
All four speakers highlight the strategic importance of open-source AI: Raphael and Gunda stress its utility and transparency for diplomatic tools, Shyam points to its democratising effect for middle-power cooperation, while Norman warns against reliance on foreign open-source models and advocates indigenous development [127-130][123-130][219-222][157-164].
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source strategies are promoted to avoid reliance on proprietary foreign AI and to enable joint Indo-German projects, a theme noted in the Indo-German collaboration reports and the debate on low-cost open models reshaping the market [S58][S55].
Middle powers such as India and Germany should cooperate on sector‑specific AI applications rather than competing in frontier model development.
Speakers: Raphael Leuner, Shahani Yaktiyami, Shyam Krishnakumar, Gunda Ehmke
Middle powers can collaborate in small, balanced groups to develop AI without creating dependency (Raphael Leuner)
Great powers compete on frontier AI; middle powers like India and Germany leverage regulatory strength (Germany) and application focus (India) to exert influence (Shahani Yaktiyami)
India’s contextual model innovation and Germany’s industrial expertise create opportunities for sector‑specific Indo‑German cooperation (Shyam Krishnakumar)
Concrete, application‑focused cooperation is needed beyond abstract governance frameworks (Gunda Ehmke)
The speakers converge on the view that India and Germany, as middle powers, should focus on joint, sector-specific AI projects (e.g., industrial AI, healthcare) and avoid dependence on frontier model races, emphasizing complementary strengths and practical collaboration [212-216][46-51][108-119][122-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Bilateral cooperation between India and Germany on applied AI, focusing on sectoral expertise instead of frontier model races, was identified as a viable pathway in recent policy dialogues [S50][S59].
AI poses significant risks (misinformation, bias, security) that require robust mitigation, regulation and inclusive governance mechanisms.
Speakers: Raphael Leuner, Norman Schulz, Shahani Yaktiyami
AI is already being used by actors to amplify geopolitical narratives through rapid generation of fake websites and social‑media content (Raphael Leuner)
International regulation, akin to nuclear arms control, is needed to mitigate AI risks that outweigh benefits (Norman Schulz)
AI can amplify harmful or biased narratives; robust mitigation and bias‑detection tools are required (Shahani Yaktiyami)
All three speakers acknowledge AI’s potential for misuse-through misinformation, bias, or security threats-and call for strong mitigation strategies, including regulation, bias-detection tools, and inclusive international governance frameworks [294-298][66-78][281-286].
POLICY CONTEXT (KNOWLEDGE BASE)
Broad agreement on safety, trust and inclusive governance appears across multiple AI policy forums, calling for risk-based, participatory frameworks rather than prescriptive rules [S48][S67][S70].
AI adoption will expand rapidly across multiple sectors in the next few years, and this diffusion will not be monopolised by the United States or China.
Speakers: Raphael Leuner, Norman Schulz, Shyam Krishnakumar
AI adoption will expand across multiple sectors within the next five years, not limited to leading powers (Raphael Leuner)
AI functions as a cross‑sectoral tool, impacting communication, agriculture, industry, finance and diplomacy (Norman Schulz)
Focusing on application‑driven AI rather than frontier model competition creates a non‑zero‑sum environment for cooperation (Shyam Krishnakumar)
The panelists agree that AI will see broad, sector-wide uptake over the coming years, driven by applications rather than dominance of any single great power, supporting a more inclusive global AI landscape [208-213][61-64][120-121].
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts stress that widespread, equitable diffusion of AI across industries is essential to prevent concentration of power in a few nations or firms [S44][S45].
Similar Viewpoints
Both see AI as a productivity enhancer for diplomats that handles data‑intensive tasks while leaving strategic analysis and decision‑making to humans [130-133][250-256].
Speakers: Raphael Leuner, Norman Schulz
AI can support negotiations by processing massive document piles, but human innovation remains essential (Raphael Leuner)
AI will free diplomats from tedious data‑consumption tasks; it will not replace human decision‑making (Norman Schulz)
All three advocate for open‑source AI as a means to ensure transparency, reduce vendor lock‑in and foster collaborative development, especially between India and Germany [127-130][123-130][219-222].
Speakers: Raphael Leuner, Gunda Ehmke, Shyam Krishnakumar
Prioritising open‑source technologies reduces dependence on external vendors and opens space for Indo‑German model development (Raphael Leuner)
Open‑source AI can serve as a negotiation tool, providing transparent models that diplomats can audit and adapt (Gunda Ehmke)
The open‑source revolution democratises AI, lowering entry costs and enabling middle‑power collaboration (Shyam Krishnakumar)
Consensus that India and Germany, as middle powers, should pursue concrete, sector‑specific AI collaborations rather than chase frontier model supremacy [212-216][46-51][108-119][122-126].
Speakers: Raphael Leuner, Shahani Yaktiyami, Shyam Krishnakumar, Gunda Ehmke
Middle powers can collaborate in small, balanced groups to develop AI without creating dependency (Raphael Leuner)
Great powers compete on frontier AI; middle powers like India and Germany leverage regulatory strength (Germany) and application focus (India) to exert influence (Shahani Yaktiyami)
India’s contextual model innovation and Germany’s industrial expertise create opportunities for sector‑specific Indo‑German cooperation (Shyam Krishnakumar)
Concrete, application‑focused cooperation is needed beyond abstract governance frameworks (Gunda Ehmke)
Both stress the necessity of governance, regulation and technical safeguards to address AI‑driven misinformation and bias risks [66-78][281-286].
Speakers: Norman Schulz, Shahani Yaktiyami
International regulation, akin to nuclear arms control, is needed to mitigate AI risks that outweigh benefits (Norman Schulz)
AI can amplify harmful or biased narratives; robust mitigation and bias‑detection tools are required (Shahani Yaktiyami)
Unexpected Consensus
Both a career diplomat (Norman Schulz) and a technology‑policy expert (Shahani Yaktiyami) assert that AI should never be allowed to autonomously shape geopolitical narratives, emphasizing human control despite their different professional lenses.
Speakers: Norman Schulz, Shahani Yaktiyami
AI should not be allowed to shape geopolitical narratives; human judgment must remain central (Shahani Yaktiyami)
AI‑generated content tends to be repetitive and dull, and humans must retain creative control (Norman Schulz)
While Norman focuses on diplomatic practice and Shahani on policy analysis, both converge on the principle that narrative formation must stay human-led, which was not an obvious point of overlap given their distinct roles [274-280][287-293].
POLICY CONTEXT (KNOWLEDGE BASE)
The insistence on human-led narrative formation aligns with statements that diplomats must remain masters of AI tools, a view echoed by experts from both diplomatic and technical backgrounds [S64][S62].
Overall Assessment

The panel exhibits a high degree of consensus across technical, diplomatic and policy dimensions: AI is viewed as an augmenting tool for diplomatic work, open‑source approaches are championed, middle‑power collaboration (especially Indo‑German) is encouraged, and robust risk‑mitigation and inclusive governance are deemed essential. These shared positions suggest a coherent, collaborative trajectory for AI integration in foreign policy, emphasizing human‑centric decision‑making, transparency, and multilateral cooperation.

The level of agreement is strong, with clear alignment among all speakers, indicating that future initiatives are likely to focus on practical, open‑source, middle‑power‑driven AI projects under inclusive international governance frameworks.

Differences
Different Viewpoints
Approach to governing AI risks – top‑down international regulation versus bottom‑up middle‑power collaboration and fast internal development
Speakers: Norman Schulz, Raphael Leuner, Gunda Ehmke
International regulation, akin to nuclear arms control, is needed to mitigate AI risks that outweigh benefits (Norman Schulz)
Fast co‑creation through internal data labs enables rapid AI deployment, avoiding slow traditional IT cycles (Raphael Leuner)
Concrete, application‑focused cooperation is needed beyond abstract governance frameworks (Gunda Ehmke)
Norman argues that AI poses security risks that require global, treaty-like regulation and US-China cooperation, likening it to nuclear arms control [66-78]. Raphael counters that the German Foreign Office’s internal data labs allow rapid, agile AI development and that middle-power collaboration can spread AI without dependence on great-power frameworks [18-21][212-216]. Gunda stresses moving from high-level governance documents to tangible, use-case-driven projects, questioning whether broad regulatory approaches are sufficient [122-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates reveal a split between binding, top-down regulatory proposals and flexible, middle-power-driven, evidence-based approaches to AI risk management [S48][S47][S67].
Use of open‑source AI models, especially those originating from China, and associated security implications
Speakers: Raphael Leuner, Norman Schulz
Prioritising open‑source technologies reduces dependence on external vendors and opens space for Indo‑German model development (Raphael Leuner)
Reliance on foreign (e.g., Chinese) open‑source models poses geopolitical security risks; developing home‑grown systems is advisable (Norman Schulz)
Raphael describes a strategy of adopting open-source AI, reusing existing applications, and seeking Indian LLM alternatives to lessen reliance on Chinese models, viewing open-source as a pragmatic choice [127-130][133-137]. Norman warns that Chinese open-source models embed Chinese ways of thinking, creating security concerns, and advocates for developing indigenous or partner-based models to maintain trust [157-164].
POLICY CONTEXT (KNOWLEDGE BASE)
Security concerns over Chinese open-weight models contrast with their competitive advantages, reflecting ongoing tensions in the open-source AI ecosystem [S55][S56].
Whether AI should be allowed to shape geopolitical narratives
Speakers: Shahani Yaktiyami, Norman Schulz, Raphael Leuner
AI should not be allowed to shape geopolitical narratives; human judgment must remain central (Shahani Yaktiyami)
AI‑generated content tends to be repetitive and dull, but AI can assist in detecting and correcting bias (Norman Schulz)
AI is a tool for actors to amplify geopolitical narratives, not to shape them themselves (Raphael Leuner)
Shahani explicitly opposes AI influencing narratives, insisting humans must craft them based on evidence [274-280]. Norman acknowledges AI’s limitations but highlights its utility in bias detection, suggesting a supportive role rather than narrative creation [287-293]. Raphael points out that AI is already being used by actors to mass-produce and amplify messages, indicating an indirect shaping effect [294-298].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over AI-generated geopolitical narratives pits the potential for efficiency against the risk of losing human diplomatic judgment, a tension highlighted in AI-diplomacy discussions [S64][S62].
Priority of governance mechanisms versus concrete application pilots for AI cooperation
Speakers: Gunda Ehmke, Norman Schulz, Raphael Leuner
Concrete, application‑focused cooperation is needed beyond abstract governance frameworks (Gunda Ehmke)
The Global Digital Compact and the UN Independent Scientific Panel on AI provide inclusive, science‑based governance mechanisms (Norman Schulz)
Focus on open‑source and applications, building tools rather than just governance (Raphael Leuner)
Gunda calls for moving past high-level policy documents to tangible Indo-German AI projects and tools [122-126]. Norman emphasizes the importance of the Global Digital Compact and UN AI panel as essential governance structures for inclusive AI policy [166-184]. Raphael aligns with Gunda on application-driven work but also references the need for broader governance through open-source standards [127-130].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders differ on whether to first establish governance frameworks or to launch pilot applications, a disagreement noted in panels emphasizing context-specific solutions over universal mandates [S48][S70].
Unexpected Differences
Open‑source AI from China as a security risk versus its practical adoption
Speakers: Raphael Leuner, Norman Schulz
Prioritising open‑source technologies reduces dependence on external vendors and opens space for Indo‑German model development (Raphael Leuner)
Reliance on foreign (e.g., Chinese) open‑source models poses geopolitical security risks; developing home‑grown systems is advisable (Norman Schulz)
While both advocate for open-source, Raphael sees Chinese open-source models as a current reality that can be mitigated by seeking Indian alternatives, whereas Norman treats any Chinese open-source component as a fundamental security threat, urging avoidance altogether. This divergence on the acceptability of existing Chinese open-source AI was not anticipated given their shared emphasis on open-source strategies. [133-137][157-164]
POLICY CONTEXT (KNOWLEDGE BASE)
The dual view of Chinese open-source AI as both a security threat and a pragmatic tool reflects the broader discourse on balancing risk mitigation with technological uptake [S55][S56].
AI’s role in narrative formation versus AI as a bias‑detection tool
Speakers: Shahani Yaktiyami, Norman Schulz
AI should not be allowed to shape geopolitical narratives; human judgment must remain central (Shahani Yaktiyami). AI‑generated content is repetitive, but AI can help detect and correct bias (Norman Schulz).
Shahani categorically rejects any AI influence on narrative creation, while Norman acknowledges AI’s utility in identifying bias within narratives, implying a more permissive stance toward AI’s involvement in the narrative pipeline. The contrast between a total prohibition and a supportive, corrective role was not foreseen. [274-280][287-293]
POLICY CONTEXT (KNOWLEDGE BASE)
While some argue AI should support narrative creation, others stress its use for bias detection and fairness, echoing discussions on AI neutrality and dataset bias in conflict mediation and fairness panels [S53][S51].
Overall Assessment

The panel displayed several substantive disagreements: (1) the preferred governance model for AI (global regulation vs. agile middle‑power collaboration), (2) the security implications of using open‑source AI from China, (3) the extent to which AI should influence geopolitical narratives, and (4) the balance between high‑level governance mechanisms and concrete application pilots. While participants shared a common goal of leveraging AI for diplomatic advantage, they diverged on strategic pathways and risk assessments.

Moderate to high – the disagreements are foundational (governance philosophy, security risk perception, and ethical use of AI) and could shape the direction of Indo‑German AI cooperation and broader international AI policy. If unresolved, they may lead to fragmented approaches, limiting coordinated action and potentially increasing geopolitical tensions around AI deployment.

Partial Agreements
Both agree that AI should be used to automate routine information processing for diplomats, enhancing efficiency, but differ on the extent of AI’s role—Raphael emphasizes negotiation‑specific tools, while Norman stresses AI as a general productivity enhancer without decision‑making authority [130-133][250-259].
Speakers: Raphael Leuner, Norman Schulz
AI can support negotiations by processing massive document piles, but human innovation remains essential (Raphael Leuner). AI will free diplomats from tedious data‑consumption tasks; it will not replace human decision‑making (Norman Schulz).
Both recognise AI’s transformative impact on diplomacy, but Shahani frames it as a broader historical continuity, whereas Raphael focuses on its current use as a tool for narrative amplification rather than a fundamental shift in diplomatic practice [35-41][294-298].
Speakers: Shahani Yaktiyami, Raphael Leuner
AI is the latest technological revolution shaping diplomacy, continuing a historical pattern (Shahani Yaktiyami). AI is a tool for actors to amplify geopolitical narratives, not to shape them themselves (Raphael Leuner).
Takeaways
Key takeaways
Fast, internal co‑creation through data labs enables rapid AI deployment in foreign ministries, bypassing slow traditional IT projects.

AI is viewed as a tool to support diplomatic work, e.g., processing large document sets for negotiations, while human judgment and innovation remain essential.

AI is the latest technological revolution influencing diplomacy; great powers compete on frontier AI, whereas middle powers (India, Germany) can leverage regulatory strength and application‑focused strategies.

Open‑source AI is preferred to reduce dependence on external vendors; however, reliance on foreign (e.g., Chinese) open‑source models raises security and sovereignty concerns.

Indo‑German cooperation is seen as a concrete avenue for sector‑specific AI projects (industrial automation, healthcare, data hubs) that combine Germany’s industrial data and automation expertise with India’s talent and cost‑effective model development.

Governance frameworks such as the UN Global Digital Compact and the Independent Scientific Panel on AI aim to provide inclusive, science‑based regulation, analogous to nuclear arms‑control models.

AI can both amplify biased or malicious narratives and help detect and correct bias; human control over narrative formation is essential.

Future automation in foreign‑policy work will focus on AI‑assisted information consumption, freeing diplomats for higher‑level analysis and relationship‑building.
Resolutions and action items
Adopt an open‑source‑first approach for AI projects within the German Foreign Office, reusing existing open‑source applications where possible.

Pursue concrete Indo‑German collaborative projects in sectoral AI (industrial automation, healthcare, data‑hub initiatives).

Contribute German expertise to the UN Independent Scientific Panel on AI and prepare for the Global Digital Compact dialogue in July (Geneva).

Implement internal risk‑assessment processes that factor geopolitical and national‑security considerations when selecting AI models or vendors.

Develop and deploy bias‑detection and mitigation tools as part of AI deployments in diplomatic analysis and media monitoring.
Unresolved issues
How to create a sustainable, long‑term governance model that balances rapid AI innovation with the need for international regulation and security safeguards.

Specific mechanisms for Indo‑German joint development of open‑source AI models, including funding, intellectual‑property, and governance structures.

The extent to which AI can be integrated into decision‑making workflows without compromising human oversight, especially in high‑stakes diplomatic negotiations.

How to effectively counter AI‑generated misinformation and narrative manipulation at scale, beyond existing monitoring efforts.

Clarification of the role and influence of middle powers within the Global Digital Compact and how their voices will be operationalised in practice.
Suggested compromises
Adopt a “managed interdependence” approach rather than strict technological sovereignty, allowing shared use of AI stacks while mitigating security risks.

Focus on application‑oriented cooperation (sectoral AI projects) instead of competing for frontier model dominance.

Leverage open‑source collaboration to democratise AI access, reducing reliance on any single geopolitical supplier.

Combine regulatory leadership (Germany) with application deployment strength (India) to create a balanced middle‑power strategy.
Thought Provoking Comments
The big advantage we have is fast co‑creation from within an organization – we can develop AI tools quickly because we’re embedded in the ministry, unlike traditional IT projects that take years.
Highlights a structural advantage of having data labs inside government, emphasizing speed and agility as crucial for AI adoption in diplomacy.
Set the foundation for discussing practical AI use cases in the Foreign Office and prompted others to consider how internal collaboration can overcome bureaucratic delays.
Speaker: Raphael Leuner
Technology isn’t new, but the tactics aren’t. AI is the latest tool in a long history of tech‑driven diplomacy, and middle powers like Germany and India can leverage their specific strengths on the AI value chain rather than trying to win the frontier race.
Reframes the AI debate from a binary US‑China competition to a nuanced middle‑power strategy, introducing the concept of leveraging regulatory power and application‑focused strengths.
Shifted the conversation from a geopolitical rivalry focus to exploring collaborative roles for middle powers, leading directly to discussions on Indo‑German cooperation.
Speaker: Shahani Yaktiyami
AI is at a similar point as nuclear power in the 1950s – we need international cooperation to mitigate risks, and the middle powers must find a way to contribute, not just follow the front‑runners.
Draws a historical parallel that underscores the urgency of global governance and positions middle powers as essential actors in risk mitigation.
Reinforced Shahani’s point, deepening the analysis of governance needs and prompting further dialogue on the role of the Global Digital Compact and UN panels.
Speaker: Norman Schulz
India may not build frontier models yet, but it excels in context‑appropriate innovation, cheap large‑scale inference, and sector‑specific applications like healthcare and industrial AI – perfect for Indo‑German cooperation.
Provides concrete examples of how each country’s strengths can complement the other, moving the discussion from abstract geopolitics to actionable partnership ideas.
Guided the conversation toward specific collaboration domains (healthcare, industrial data) and supported the emerging theme of sector‑focused cooperation.
Speaker: Shyam Krishnakumar
We deliberately focus on open‑source technologies; however, many leading open‑source models today come from China, so we need alternatives – Indian LLMs could be a strategic option for Europe.
Raises the strategic implication of model provenance, linking technical choices to geopolitical considerations and the need for diversified open‑source ecosystems.
Prompted a deeper look at supply‑chain security of AI models and reinforced the earlier discussion on middle‑power leverage and sovereignty concerns.
Speaker: Raphael Leuner
We must not let AI shape narratives; humans should. Allowing AI to drive geopolitical narratives risks bias from training data, so we need strong regulation and bias‑detection tools.
Challenges the assumption that AI can be a neutral storyteller, highlighting ethical risks and the necessity of human oversight and mitigation strategies.
Shifted the tone toward ethical considerations, leading to audience questions about media narratives and eliciting responses about AI’s role in bias detection.
Speaker: Shahani Yaktiyami
The best way to align AI systems with our values is to develop them ourselves rather than procure from outside; otherwise, even open‑source models carry the originating country’s worldview.
Emphasizes sovereignty and value alignment, linking technical development to political autonomy and echoing earlier concerns about Chinese models.
Reinforced the call for indigenous or jointly developed AI solutions, influencing the concluding remarks about Indo‑German open‑source collaborations.
Speaker: Norman Schulz
AI will not replace diplomats; it will free us from tedious document‑digestion so we can focus on connecting the dots and innovative cooperation – the human element remains essential.
Balances optimism about AI efficiency with a realistic view of human decision‑making, countering fears of AI‑driven policy making.
Provided a grounding perspective that tempered earlier enthusiasm, reinforcing the theme that AI is a tool, not a decision‑maker, and closing the discussion on practical implementation.
Speaker: Norman Schulz (audience response)
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad geopolitical framing of AI to concrete, collaborative strategies for middle powers. Raphael’s point about internal fast co‑creation introduced the practical lens, while Shahani’s middle‑power narrative reframed the competition as an opportunity for nuanced leverage. Norman’s historical analogy and calls for sovereign development deepened the governance debate, and Shyam’s sector‑specific cooperation ideas turned abstract concepts into actionable plans. Subsequent remarks on open‑source provenance, narrative control, and the human‑AI partnership further refined the conversation, leading to a consensus that AI should be a supportive tool developed collaboratively, especially between India and Germany, to ensure ethical, secure, and value‑aligned outcomes.

Follow-up Questions
Provide detailed explanation of the fast co‑creation model used within the German Foreign Office for AI development and deployment.
Understanding this rapid internal collaboration approach is crucial for overcoming traditional IT project timelines and delivering AI tools that meet diplomatic needs promptly.
Speaker: Raphael Leuner
How can middle powers like India and Germany leverage their specific strengths in the AI value chain, focusing on sectoral expertise rather than frontier model competition?
Identifying viable strategies for middle powers to contribute to global AI governance without directly competing with the US and China is essential for inclusive technological development.
Speaker: Shahani Yaktiyami, Norman Schulz
What is the emerging role of geopolitical risk advisors in companies developing or deploying AI, and how does this affect AI governance?
The new professional function integrates security considerations into AI development, highlighting a key factor for responsible and secure AI deployment.
Speaker: Shahani Yaktiyami
What are the security and sovereignty implications of adopting open‑source AI models developed in China, and what alternatives can be cultivated by India and Germany?
Assessing hidden biases or strategic dependencies in AI tools is vital for protecting national interests and maintaining technological sovereignty.
Speaker: Raphael Leuner, Norman Schulz
What concrete Indo‑German cooperation projects can be pursued in industrial AI and healthcare AI, including data sharing, joint model development, and deployment?
Specifying practical collaboration opportunities leverages each country’s strengths and creates mutual benefits in high‑impact sectors.
Speaker: Shyam Krishnakumar
How can AI be employed to detect and mitigate bias in media narratives, especially in geopolitical reporting?
Ensuring AI supports democratic discourse rather than amplifying misinformation is critical for trustworthy information ecosystems.
Speaker: Shahani Yaktiyami, Norman Schulz
What impact will AI‑generated diplomatic reports have on analytical diversity and decision‑making quality within foreign ministries?
Evaluating whether AI summarization reduces nuance and creativity helps safeguard the quality of diplomatic analysis.
Speaker: Norman Schulz
What are the mandate, composition, and expected outcomes of the UN Independent Scientific International Panel on AI and the upcoming AI governance dialogue in Geneva?
Understanding these mechanisms is essential for tracking progress in global AI governance and ensuring inclusive participation.
Speaker: Norman Schulz
What frameworks or processes can ensure that AI systems used by governments align with democratic values, ethics, and the Global Digital Compact?
Aligning AI with shared values is a core governance challenge that requires clear, actionable standards.
Speaker: Norman Schulz
How are state and non‑state actors using AI to shape geopolitical narratives, and what mitigation strategies can be developed?
Addressing AI‑driven misinformation campaigns is necessary to protect the integrity of geopolitical discourse.
Speaker: Raphael Leuner, Shahani Yaktiyami
What is the current state of AI adoption and innovation in India, including capacity for building frontier models and grassroots model development?
Insights into India’s position inform assessments of global AI competition and collaboration potential.
Speaker: Shyam Krishnakumar
How can the open‑source AI revolution, analogous to the 1990s operating‑system movement, democratize AI access for middle powers and reduce dominance by major players?
Exploring this pathway could enable equitable AI development and lower barriers to entry for smaller nations.
Speaker: Shyam Krishnakumar
What concrete steps can be taken to operationalize Indo‑German AI cooperation beyond high‑level statements, including joint funding, pilot projects, and governance structures?
Moving from rhetoric to actionable collaboration is necessary to realize tangible benefits from bilateral AI initiatives.
Speaker: Gunda Ehmke, Raphael Leuner
Which specific tasks in foreign policy research, decision‑making, and implementation can be effectively automated using AI?
Identifying practical automation opportunities can improve efficiency and free diplomats for higher‑order analytical work.
Speaker: Audience (Sreeni)
Can AI contribute to more unbiased media narratives in geopolitics, and what safeguards are needed to prevent AI‑driven bias?
Exploring AI’s potential and risks in shaping public discourse is vital for maintaining balanced and trustworthy information flows.
Speaker: Audience (Sanjeevni), Shahani Yaktiyami, Norman Schulz

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

AI-Powered Chips and Skills Shaping India's Next-Gen Workforce

AI-Powered Chips and Skills Shaping India's Next-Gen Workforce

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel was convened to examine how India can develop the workforce needed for a thriving semiconductor sector, with Rangesh Raghavan introducing the theme and noting the presence of experts from MeitY, LAM Research and the government [1-4][16]. Speakers included Secretary S. Krishnan, LAM’s David Freed, moderator Paul Triolo, Minister Ashwini Vaishnaw and Professor Saurabh Chandorkar, each tasked with outlining strategies for scaling talent and infrastructure [25][74][78][88][134].


Krishnan highlighted the convergence of India’s AI and semiconductor missions, the launch of ISM 2.0 to cover the full ecosystem, and the commitment to ten new fab plants with production slated for 2026 [27-31][35-41][37-40]. He stressed that a resilient, trusted supply chain is essential for both geopolitical stability and global competitiveness, and cited LAM’s state-of-the-art Bengaluru lab and its integration of India’s supply chain as a model of industry support [20-22].


The discussion identified a severe talent gap, especially in advanced manufacturing and precision equipment, noting that while India has strong design and AI talent, it lacks skilled workers for fab operations [45-48][50-52]. Training programmes have already been delivered in India, Malaysia, Singapore, Taiwan and Europe, and the government plans to expand capacity domestically [53-58][59-60]. Vaishnaw reported that university participation has grown from 50 to 315 institutions, with students now using cutting-edge design tools and producing chips across the country [103-108][109-110]. Chandorkar added that hands-on fab exposure and new curricula, such as courses on process control and equipment maintenance, are being introduced, while LAM’s proposal for faculty fellowships aims to embed industry experience within academia [140-152][155-158][208-210].


All participants agreed that addressing the million-person gap requires a broad, ecosystem-wide talent pipeline rather than narrow skill training, emphasizing problem-solving, critical thinking and interdisciplinary knowledge [172-180][184-188][287-292]. Triolo underscored the three-way partnership among government, academia and industry as crucial for sustaining momentum, and Freed praised the collaborative model that is already prompting other firms, such as ASML, to replicate LAM’s initiatives [162-164][276-283][240-247].


In sum, the panel concluded that India’s ambition to become a key player in the global semiconductor supply chain hinges on coordinated policy, expansive education and hands-on training programmes that together will close the talent shortfall and support the nation’s expanding fab ecosystem [37-40][172-180][162-164].


Keypoints


Major discussion points


Urgent need for a large, skilled semiconductor workforce – The panel repeatedly stressed that India lacks enough people trained in advanced manufacturing and precision equipment, and that closing a “million-person gap” is critical for the sector’s growth.  [45-52][172-184][145-158]


India Semiconductor Mission 2.0 expands the ecosystem – The new mission will cover the whole value chain, including domestic equipment manufacture and new fab plants, positioning India as a reliable long-term partner in global supply chains. [37-40][129-130]


LAM Research’s pivotal role in building capacity – LAM highlighted its 25-year presence, a state-of-the-art systems-engineering lab in Bengaluru, integration of India’s supply chain, and concrete workforce-development programmes such as training in fabs and OSATs, as well as faculty fellowships. [19-22][48-51][166-170][208-214]


Academic initiatives and hands-on fab training – Universities have multiplied from an initial 50 to 315, deploying the “semi-verse” platform and establishing academic fabs that provide practical exposure; however, scaling hands-on training to a million engineers will require additional resources and government support. [103-108][134-144][145-158]


Broad, problem-solving education over narrow skill focus – Speakers argued that a holistic understanding of the semiconductor ecosystem (physics, materials, process integration) is more valuable than training for a single task, and that curricula should emphasize critical thinking and interdisciplinary knowledge. [174-184][287-292]


Overall purpose / goal


The discussion was convened to chart a coordinated, “holistic” strategy for achieving India’s semiconductor ambitions by aligning government policy (ISM 2.0), industry leadership (LAM Research), and academic capacity-building. The aim was to identify workforce-development priorities, showcase existing initiatives, and solicit commitments that will enable India to become a trusted, self-sufficient node in the global semiconductor supply chain.


Tone of the discussion


Opening (0:00-5:00) – Formal and celebratory, with polite welcomes, gratitude, and promotional remarks about the exhibition and the event’s significance.


Mid-session (5:00-30:00) – Shifts to a more technical and earnest tone as speakers detail policy milestones, skill gaps, and concrete training programmes; occasional light-hearted moments (e.g., the “picture” joke) break the seriousness.


Panel segment (30:00-60:00) – Collaborative and solution-oriented, featuring back-and-forth between industry, academia, and government, with a focus on actionable steps and shared responsibility.


Closing (60:00-end) – Slightly rushed and procedural, with rapid Q&A, reiteration of key messages, and final thank-yous, indicating a transition from discussion to concluding the event.


Overall, the tone moves from formal introduction to focused, collaborative problem-solving, ending with a concise wrap-up.


Speakers

Speakers (as listed)


Paul Triolo


– Role/Title: Partner in technology practice lead at the DGA group; moderator of the panel discussion.


– Area of Expertise: Semiconductor technology, industry-government collaboration. [S1]


Participant


– Role/Title: Audience member / questioner (no specific title mentioned).


– Area of Expertise: –


Rangesh Raghavan


– Role/Title: Host/moderator of the event, representing LAM Research (implied).


– Area of Expertise: Semiconductor ecosystem development, workforce strategy. [S5]


Professor Saurabh Chandorkar


– Role/Title: Professor at the Indian Institute of Science (IISc); key partner in the Semiverse program.


– Area of Expertise: Semiconductor research, advanced manufacturing, talent development. [S7]


S. Krishnan


– Role/Title: Secretary, Ministry of Electronics and Information Technology (MeitY).


– Area of Expertise: Government policy for electronics and semiconductor industry. [S9]


Harish Kumar


– Role/Title: Representative from CSTV, Access to Energy Systems.


– Area of Expertise: Energy systems, solar technology development. [S12]


David Freed


– Role/Title: Corporate Vice President, LAM Research (advanced analytical & simulation software); Leader of Semiverse Solutions (global semiconductor modeling & workforce development).


– Area of Expertise: Semiconductor modeling, talent pipeline development, AI-driven workforce solutions. [S14]


Ashwini Vaishnaw


– Role/Title: Honorable Minister for Electronics and Information Technology, Government of India.


– Area of Expertise: National semiconductor and AI policy, industry-government initiatives. [S17]


Additional speakers (not in the provided list)


Minister Vaishnoji – Honorable Minister mentioned as arriving shortly; no further details on role or expertise.


Anand Ramamurthy – Representative from Micron; no specific title given in the transcript.


Christian sir – Referred to by Rangesh Raghavan; role not specified.


Deepa sir – Referred to by Rangesh Raghavan; role not specified.


Other unnamed audience members – Various brief interjections (e.g., “Participant”) that do not have distinct names or titles.


Full session report: comprehensive analysis and detailed insights

The session opened with Rangesh Raghavan emphasizing that a skilled workforce is essential for “the growth of the semiconductor industry and support this era” and stating that the meeting would explore “scalable, holistic workforce strategies” for India’s semiconductor ambitions [1-4][16]. He noted that the current exhibition had been extended for another day and invited participants to visit the venue tomorrow [6-15], and he introduced Secretary S. Krishnan (Ministry of Electronics and Information Technology, MeitY), David Freed (Corporate Vice-President, LAM Research), and Paul Triolo (moderator). Raghavan framed 2025 as a breakthrough year for India’s semiconductor sector, driven by government focus and the India Semiconductor Mission [16-18].


Secretary S. Krishnan used his opening remarks to illustrate the convergence of the India AI Mission and the India Semiconductor Mission, stating that “semiconductors are so central to the AI story as AI is increasingly to the semiconductor story” [28-30]. He announced that India had joined the Pax Silica consortium to build a “trusted supply chain” and argued that a resilient, diversified global supply chain is needed both for geopolitical stability and to avoid the pandemic-era over-reliance on single geographies [31-33]. Krishnan outlined the government’s commitment to ten new fab plants, with four slated to start production in 2026 and the remainder within a year [35-37]. He highlighted the launch of India Semiconductor Mission 2.0 (ISM 2.0), which will cover the entire ecosystem, including domestic semiconductor-equipment manufacturing [37-40]. Citing market forecasts, he projected a $100 billion domestic semiconductor market by the end of the decade and stressed the need to build capacity for both domestic consumption and export [40-42]. He also pointed out that while India already supplies about 20 % of global semiconductor-design talent, it “lacks people in advanced manufacturing,” especially in precision equipment [43-48][50-52].


Before the panel began, David Freed offered a brief opening comment that “even design… needs a strong manufacturing backbone,” underscoring the industry’s long-term involvement in India [19-22][18].


Minister Ashwini Vaishnaw (Electronics and Information Technology) provided quantitative evidence of rapid academic expansion: the workforce targets of 60,000 clean-room operators and 80,000 design engineers are being supported by growth from 50 to 315 universities using the “semi-verse” platform, with students across Assam, J&K, Kerala and Tamil Nadu now designing chips and seeing them fabricated at SCL Mohali [103-108][109-110]. He reiterated that semiconductors constitute a critical layer in the AI architecture and urged all stakeholders to participate in this ecosystem [111-114][119-122]. Vaishnaw also announced a new fab in Uttar Pradesh, inaugurated by the Prime Minister [123-124].


During a brief interlude, Raghavan presented the minister with a piece of Bidriware, symbolically linking the traditional art of metal-inlay to semiconductor etching processes [150-152].


The moderator, Paul Triolo, introduced the panel (noting the absence of Anand Ramamurthy) and identified the participants: David Freed and Professor Saurabh Chandorkar (IISc). He repeatedly emphasized the necessity of a three-way partnership among government, academia and industry, tying it to ISM 2.0’s focus on skilling, supply-chain integration and manufacturing [129-133][131-133].


Professor Saurabh Chandorkar described the academic side of talent-building. He noted that IISc’s academic fab ranks among the world’s top three, but a single fab cannot train the “one-million” engineers required [143-146]. IISc is therefore revising curricula to include fab-centric courses such as statistical process control (SPC), has launched a training fab and the INUP programme that brings students from across India into hands-on fab work [148-152][154-158][157-160]. Chandorkar stressed the need for a second layer of practical training beyond tool-level knowledge and advocated for industry-run short courses (e.g., pressure-gauge and P&ID training) and expanded collaborations with companies like LAM [197-199][200-202]. He welcomed the idea of faculty fellowships as a way to embed industry experience within universities [208-210].


David Freed expanded on the industry perspective, describing a “million-person gap” that spans roles from field-service engineers to process, equipment, metrology, device and reliability engineers [172-179]. He argued that the gap cannot be closed by teaching isolated skills; instead, a “broad talent” approach that gives students a holistic understanding of what is being produced and why is required [181-188]. Freed proposed faculty fellowships that would place university staff inside semiconductor firms for six to nine months, thereby transferring industry-relevant knowledge back to academia [208-214][207-214]. He also highlighted LAM’s semi-verse platform as a vehicle for ecosystem-wide education [166-170].


The term “IAS” arose during the discussion, but its meaning was not defined in the transcript [250-251]. Triolo then asked how ISM 2.0 could support IISc and invited Freed to identify gaps and suggest areas for expanded collaboration [129-130][131-133][162-164][170-174][184-188].


In the audience Q&A, Harish Kumar asked about developing a domestic wafer-production capability for solar cells, noting the current lack of such a programme in India [262-267]. Chandorkar responded that efforts on polycrystalline silicon growth are underway, though details remain confidential [269-272]. Another participant asked how a young person could enter the semiconductor market; Freed advised focusing on “critical thinking, problem-solving and a broad-based understanding of physics, chemistry and material science” rather than early specialization [287-292].


The discussion revealed different emphases rather than a direct disagreement: Freed championed a broad, problem-solving curriculum that builds ecosystem awareness [181-188], while Chandorkar emphasized immediate, hands-on skill modules and short-term industry courses [197-199][154-158]. Both agreed that faculty fellowships and practical short courses are valuable mechanisms for strengthening academia-industry linkages [208-210][197-199][200-202].


The panel discussed several possible actions, including:


* LAM Research continuing to expand its semi-verse platform and exploring faculty-fellowship schemes [166-170][208-214];


* IISc and partner universities developing additional hands-on courses, scaling up training fabs, and aligning PhD projects with industry needs [148-152][154-158][221-224];


* The government providing funding and policy support for these training facilities and ensuring curriculum alignment with fab-relevant skills under ISM 2.0 [129-133][154-158].


In summary, the discussion underscored a high level of consensus that India must develop a multi-disciplinary talent pipeline of roughly one million workers, that education should prioritize holistic, problem-solving understanding alongside concrete, hands-on training, and that coordinated three-way collaboration among government, academia and industry is essential. The panel linked semiconductor capability directly to AI advancement and global supply-chain resilience, highlighted the urgent need for advanced-manufacturing skills, and identified concrete steps (faculty fellowships, expanded hands-on training, and policy support under ISM 2.0) to bridge the talent gap and position India as a trusted node in the worldwide semiconductor ecosystem [172-180][37-40][162-164][181-188][208-214][145-158].


Session transcript
Complete transcript of the session
Rangesh Raghavan

required workers to enable the growth of the semiconductor industry and support this era. We’re here today to talk about just that. Thank you for the opportunity to engage in this important conversation. We have experts here who can talk about how we build scalable, holistic workforce strategies to develop India’s semiconductor ambitions. We extend a warm welcome to our guests today. I’ll start with Shri Krishnan ji, Secretary of MeitY. Thank you, sir, for joining us today. We know you’re very busy, but if I may add, excellent job by the MeitY team; we’re very proud to be here at this event. It was a mind-blowing exhibition. For those of you who have not yet seen the exhibition, I urge you.

It has apparently been extended by a day, so if you get the chance, I urge you to visit tomorrow. You can visit till 8 p.m. today, sir. Thank you, sir. We also have here with us David Freed, Corporate Vice President and leader of LAM Research’s advanced analytical and simulation software business that supports the development of the semiconductor industry. We also have Mr. Paul Triolo, a partner and technology practice lead at the DGA Group, who graciously agreed to moderate the panel discussion that is to follow shortly. To set some context for both these sessions: 2025 was a great year for the India semiconductor industry as well. With the right focus of the government, and thanks to the India Semiconductor Mission, years of policy vision are finally translating ambition into reality, and we are beginning to see the fruits of that now. Rightfully so, the government has expanded its focus beyond just wafer fabrication to the larger ecosystem, because we realize that it takes a whole village to make this happen.

How do we ensure that we have the right talent, the research infrastructure, the technology expertise, the supply chain, and all of the other things that it takes to support this sector? With the industry accelerating past a trillion dollars, we at LAM recognize the importance of supporting a globally distributed, innovation-led ecosystem. We’ve been in India for 25 years, and we are committed to being a long-term partner and contributor to this. We have a state-of-the-art systems engineering lab for semiconductors in Bengaluru, which continues to grow and is significantly expanding India’s contribution to the global industry. We are also making rapid progress in integrating India’s supply chain into our global supply chain. But most importantly, we have taken big strides in supporting the development of the workforce in India, and David will talk about that a little more shortly.

So I won’t take much more time, but I’ll invite Secretary Krishnan to share a few remarks. Thank you. Do you want a picture? He wants a picture now.

S. Krishnan

Part of the planning for many of these sessions included instructions that the picture of the panellists needs to be taken right at the beginning, so that if somebody goes missing midway through, they’re not missed. So I guess he was getting to do his job. LAM Research in some ways is a bit of a lucky charm as far as I’m concerned, and I think Rangesh will understand what I’m trying to say. But more importantly, I’m really happy to be part of this session, because this is one of those sessions that represents the convergence in what India is attempting. We have two major missions, the India AI Mission and the India Semiconductor Mission, and this session represents how those two missions are converging, or getting together.

It represents how semiconductors are so central to the AI story, as AI increasingly is to the semiconductor story. This morning we were also added to the Pax Silica, which again represents a very important step forward in building a trusted supply chain in the semiconductor space. What the world needs is a resilient and reliable supply chain, not just for geopolitical reasons but for other reasons as well. We saw issues relating to the supply chain crop up in the COVID pandemic, and therefore over-reliance on any one geography is always going to be a problem, and India needs to be part of this game. And for India to be a reliable long-term partner in this game, it is also very important that we are not just part of the design teams, which we already are, including for LAM Research and for many other leading semiconductor companies in the world, but that we are also part of the manufacturing.

And manufacturing not just of the chips. We have already committed to 10 major semiconductor plants across the country, and at least four of them will commence production during the current year, 2026, with the remaining in due course, in about a year or so. But more importantly, the India Semiconductor Mission 2.0 has also been announced, which will cover the entire ecosystem, including the manufacture of semiconductor equipment in the country. And I think that is a very, very critical and important step. And this is important in a context where the use of semiconductors is only going to grow, not come down. India’s own market for semiconductors is going to be about $100 billion by the end of this decade, a fairly substantial part of the global market.

And we need to build capacity to actually cater to a significant part of this market, and in some sense also for export. The export part is important not just from the perspective of being competitive and efficient (if you’re not able to export, it obviously means you’re not globally competitive and efficient), but also because when you are part of a global supply chain, you are never going to manufacture everything in the chain. You need to be a significantly important, indeed indispensable, part of it somewhere, so that you don’t get knocked out of it at somebody else’s whim. That is the way this entire system works; it’s the way the global value chain works, and that’s where we are coming together in this entire space. What LAM is doing in this space is extremely important. Equally important, if we are to do this kind of advanced manufacturing in the country, is the capacity building to have the skills to do it. We keep talking about STEM skills in this country, and about the number of people we have: we have 20% of the world’s semiconductor design talent in this country.

We are also recognized as having one of the largest talent pools for manufacturing, and for AI, in the world. Both of these are true. But where we lack is people in advanced manufacturing, in the actual manufacture of semiconductors. Where we lack is in the precision manufacturing of the equipment needed for semiconductors. And LAM Research and companies of that nature, in building the semiconductor ecosystem in this country, are looking to develop precisely that: the precision manufacture of semiconductor equipment. That means we will have to skill people in that space, in that line of work. And that’s the real challenge we will be facing in the next five years. As part of the India Semiconductor Mission, we have trained workers.

In fabs and in OSATs, not just in India, such as at the semiconductor lab at Mohali, but also in Malaysia. We have trained people in Singapore, in Taiwan, in Europe, in different parts of the world. And we will continue to do that, but we will also need more capacity to do it here. The training and research capacity being built by companies like LAM will have an important implication there, and the government will support those initiatives as part of the India Semiconductor Mission 2.0, to make sure that India becomes a key player in this space as well and becomes a key partner in global supply chains.

It’s an investment that the world is making in India, which I can assure you will be paid back in no uncertain terms, in the form of a resilient, trusted value chain for semiconductors for the world. That’s precisely what we are attempting to do through this series of initiatives. Today we can no longer speak of AI without speaking of semiconductors, or vice versa, which is why what LAM is doing, and what we are attempting to do in terms of skill building in this critical space, is so important, and which is why I’m extremely happy to be part of this event. All strength to you at LAM; may you continue to be a lucky charm. Thank you.

Rangesh Raghavan

Thank you very much, Krishnan sir. Sir, you’re in such a hurry; we want to make sure you get your gifts. I just wanted to wind down. Five minutes. Okay. We are eagerly awaiting the arrival of Honourable Minister Vaishnaw ji; he is five minutes away, is what I’m just told. Minister Vaishnaw ji has been instrumental in getting this industry where it is in India over the past few years. We look forward to his presence here shortly. In the interim, I’d like to invite David Freed to give a few comments. David is the leader of our global semiconductor modeling and workforce development organization called Semiverse Solutions. David has played a key role

in building India’s workforce training in advanced semiconductor manufacturing. He’ll say a few words about that. Thank you. Thank you very much.

David Freed

even design. And so the objective here is really to drive across the country for full scaling of our talent development. So with that I’ll wrap up. Thank you very much for your attention and I think we’ll kick off our panel pretty soon. I’m sorry.

Rangesh Raghavan

Thank you very much, David. Welcome, sir. It’s a pleasure to see you again. We know you’re very busy, and this is one of the marquee events for the country for the whole year. The scale and the impact of this event are truly mind-boggling, at the scale we have been able to do it. So congratulations to you, sir, and the team; the exhibits we saw today were amazing and inspiring. They speak to the potential of AI, and also to the importance of the semiconductor industry in enabling this transition, and to the role that companies like LAM play in that. And we are very grateful to you, sir, for your support.

You’ve always been very supportive of us in our journey here, and you continue to be, so we’d like to hear a few remarks from you. We know you’re a very busy person, so we’d appreciate it. Thank you.

Ashwini Vaishnaw

Is this the LAM team, or people who have come to listen to LAM? How many people work in LAM? Mostly people who are mostly here. LAM supplier ecosystem? Okay, very good. Solar technology? You’re in solar, very good. The way the semiconductor industry is growing in India is an unprecedented thing, in just a few years. In the beginning of 2014, I was told that I was going to be a member of the LAM team.

Initially, we were focused on design, and we had a lot of new capabilities in design. Then we came to manufacturing, and now we are going much deeper into equipment and materials. In 2022, when the Semiconductor Mission started, we had a target of 60,000 talent for clean-room operations and 80,000 overall design engineers. We thought we would start in 50 universities. Today, we have 315 universities. We already have students using the world’s latest design tools, designing chips, getting them manufactured at SCL Mohali, and validating them.

And throughout the country, from Assam, J&K, Kerala, and Tamil Nadu, students from all over the country are doing chip design themselves. This capability is going to become a great strength in the coming years. And we all know that in this world of AI, in the age of intelligence, semiconductors will be one of the most important layers. In this architecture of five layers, the semiconductor is going to be a very important layer. So, all of you, please participate in this. I would like to thank LAM for taking this initiative, and all the people who have got associated, especially the universities. How many people have come from the universities? How was your experience coming from the university?

Very good. How easy was it to use this entire Semiverse? Very easy. Actually, my good friends from LAM… It was easy. Did anyone find it difficult? The talent gap has to be filled by India only. That means all that work is going to come to India. That will be a huge opportunity and space for our young people. And tomorrow, in Uttar Pradesh, the foundation of a new semiconductor plant will be laid by our Prime Minister, Shri Narendra Modi. Many congratulations.

Rangesh Raghavan

As you know, sir, we are in the business of deposition and etching. This is an old 14th-century Indian craft called Bidriware, from the district of Bidar in North Karnataka, where they also use a damascene process, which is what is used for the most advanced semiconductors today. This plate shows the skill of the artisans, who have manually etched these features, deposited metal within those etched features, and then polished it, which is exactly the process used today in semiconductor manufacturing. So we thought it would be very appropriate for you to have this gift. Thank you very much, sir. Thank you so much, sir. Thank you.

Thank you. So now we can proceed with the panel discussion in the remaining time we have. We have Paul Triolo here to conduct the panel discussion. We had Mr. Anand Ramamurthy from Micron due to join us; unfortunately, he had a personal emergency and had to leave town, so we wish him well. In the meanwhile, we’ll have David, and we’ll have Professor Saurabh Chandorkar. Professor Chandorkar is one of our key partners at the Indian Institute of Science. He has been instrumental in the launch and execution of the Semiverse program, and he is also very busy advancing the state of the nation in the most advanced research areas for semiconductors and their applications. So we’d love to hear from him as well. Thank you very much. Thank you, Paul.

Paul Triolo

Thank you. So, okay, I’m going to pick up on some of the themes that were discussed earlier. I was going to grill Secretary Krishnan on ISM 2.0, but unfortunately we can’t do that. But I think it’s really important looking forward: as was mentioned, ISM 2.0 will focus on skilling, on supply chains, and on manufacturing. So let me start with Dr. Chandorkar. We know that IISc is hosting a really rich center in Bangalore with LAM and other companies that is critical for the skilling issue in the semiconductor industry going forward. How do you see the future shaping up in 2026? And what does IISc need, for example, from the government under ISM 2.0?

Professor Saurabh Chandorkar

Sure. Sure. So let me just start by saying that it’s actually quite amazing for me to see the dream of having fabs come up in India. It was something that goes back to my father’s time; he was also a professor at IIT Bombay and worked on semiconductor manufacturing and technology. Anyway, fast forward, and we are in this amazing position where we are actually getting fabs here, which obviously, as has been discussed, leads us to realize that we actually need a lot of workforce. And it’s not that we didn’t have people here learning, say, semiconductor technology, or that we were not doing semiconductor design.

But what was actually missing was the ability to see how a fab actually works, where you actually go and interact with tools. And that’s where the Semiverse comes in. We at IISc do, in fact, have a really good fab; as an academic fab, I would say we are probably in the top three or four in the world. So we are pretty good there, but that’s not the case for most of the universities here. And we alone cannot take on the role of training one million people; that’s just impossible. So when this whole program came, it was an ideal opportunity, and of course that’s very exciting. We also recognize that this needs a certain re-look at the way we teach our coursework.

So, for example, we started teaching courses such as advanced nodes from the perspective of the fab, and that’s where, in fact, we teach with and make use of this software. I, for example, teach SPC, which is basically statistical process control: how does one do that? Those are the kinds of things that are actually really required for a fab. And so the way I see it, the foundation has been laid down, and I am sure that if this continues, along with the support of the government, we’ll do just fine. But the ask is not small, by the way. It’s not that once you get trained on tools like this, you immediately become ready to go and start working in the fabs.

That’s not the case. What needs to be understood, therefore, is that there is a second layer of hands-on training that needs to happen. We ourselves have a training fab that is currently getting established, and this needs to happen across India far more. We already run programs called INUP, where people come from all around India and do some fabrication work in our fabs, but this would be more intended towards training. And so we are gearing ourselves up for that, and I think this needs to happen everywhere else where more fabs are coming up.

Paul Triolo

Great, great. So, I mean, as we’ve heard, this integration of government support with both the academic piece and the industry piece is a really important three-way relationship. So I’m going to go back to David, after his great presentation on Semiverse, and say: LAM, as I think everybody understands, is such a critical part of the supply chain. I mean, you know, no LAM, no semiconductors, right? So, David, how do you envision this workforce? As Professor Chandorkar has noted, the foundation has been laid, but as AI is taking off and as we look forward to the next three to four years, we’re going to see this huge demand.

And the million-person shortage really blows my mind; that’s a huge number. So in terms of support from the government to help close that gap and continue the momentum that LAM has generated here, what are the gaps you see? And are there areas you’d like to see expanded in terms of this collaboration between the government and academia?

David Freed

Okay, so I’ll start just thinking about the gaps, right? This million-person gap. I think it’s important to recognize that that gap is not a single type of person or a single type of skill. There are gaps across the entire ecosystem. And that ecosystem spans, even just from LAM’s perspective, from field service engineers who maintain the tools in the lab and in the fab, all the way to process engineers, process developers, and equipment engineers. And if you expand out to the rest of the ecosystem, our customers will have demands for metrology engineers, device engineers, simulation, and reliability. So the span of disciplines that makes up that million-person gap is very, very broad.

Okay. And so we tend to focus more on developing talent and a talent pipeline, rather than just educating on individual skills. And I think that’s super important for the future of semiconductors in India: that we focus on broad talent. And I actually want to touch on a word that you said. I think you said it five different times in your response, Professor: the word "understand". The understanding of what we’re producing, the understanding of what our products are, is so much more important than a singular skill to go do one thing. And so the Semiverse program at IISc, as we’ve expanded out across the country, is more about teaching students what we are making: what are the devices, what does process integration mean, what are we creating, so that those students can go off into various different areas of the ecosystem. Are they ready for all of those jobs with one class?

No, of course not. They need the additional hands-on training; they need additional education in those areas. But my recommendation, broadly, is: focus on talent rather than skill, combining a broad understanding of the industry, of what we’re trying to accomplish and what we’re building. The countries that have historically led this industry have been working at this for 50 to 70 years; they’ve developed that understanding and that broad swath of knowledge over 50 to 70 years. If we’re going to do it here in two years, it’s going to take a very different focus on how we develop that understanding of the industry. That’s my expectation, but by doing that, we can address all of those gaps at the same time.

Paul Triolo

Great, great. Yeah, I mean, I think that "skilling" is the popular word here, but it may not be the right way to think about this industry, given what we discussed about the complexity of manufacturing and the disciplines that are needed. It really is a commitment to a huge program of talent development, where, again, collaboration with IISc and the academic world is so important. So let’s turn back to Professor Chandorkar.

I know we’re going to have a little bit of time for questions at the end, I hope. So I’ve talked about what IISc is looking for from the government; what is IISc looking for from the industry as we enter, particularly, ISM 2.0, which I think is really important? We may not know all the details. And then, are there areas where things can be improved or streamlined? And what are the challenges? Because, as we know, this is a complex

Professor Saurabh Chandorkar

Right. So from the industry: some of these things we are already in the process of discussing with industry. As he just mentioned, you don’t necessarily have to focus on one particular skill, but tailoring the coursework to what is actually essential for some of the skills that are needed is still something that has to happen. As an example, we recently started a course just giving hands-on training to students and people working in labs: how do pressure gauges work, how do you build P&ID systems. Those are the kinds of things he just talked about, for example, being able to maintain tools.

And that’s the kind of training that we are, in fact, giving in our own courses as well. In fact, one rather interesting way in which IISc is currently providing service to the industry is simply by training our own 50-odd employees who work in our fabs. Those employees are immensely in demand, and it’s very hard for us to keep them. So what we would like more of from industry is this kind of hand-holding. For example, we talked with LAM and did this together with them, and this needs to grow across the board. To some extent we can do it, but since LAM is already giving out this software to so many other places, maybe it would be easier to do the same elsewhere as well, and I’m sure that’s something that’s going to be of great use.

David Freed

Just one comment I’ll make: this is one of the few situations where industry doesn’t need to be convinced to be involved. If we don’t fill that talent gap, we will fail. All of our business objectives and our growth objectives for the next 10 years require the talent pipeline to be developed. So this is not something where you’re trying to crack into industry or convince us to do something we don’t want to do. We fail if this doesn’t happen. And so I think it’s one of these examples where we have mutually, perfectly aligned objectives. I’ve had meetings for the last two days with different ministers and different agencies here in India where we’re trying to find ways we can be more involved.

One idea that came up over the last couple of days, and I hope I’m not ruining any surprise, is faculty fellowships at these companies. Right? If we could take the faculty and give them a job, if we can figure out a way to get that funded, give the faculty a job for six to nine months inside our companies, in the industry, and really drive more industry-relevant knowledge to the faculty and the universities, I think this is a brilliant idea. And we’re going to try to pursue it. And this idea only comes when we sit down at the table and start talking: What do the universities need? What do we need? What can we provide?

How do we make this work? But nobody needs to convince us. We need this to happen.

Professor Saurabh Chandorkar

Right, right. Yeah, along the same lines: maybe more of the projects these students do for their PhDs could be aligned with industry, not just LAM, actually, but the entire sector.

David Freed

No, no, no, just LAM. Just LAM. Just LAM.

Professor Saurabh Chandorkar

Yeah, so I think that would really work out, and I think that’s kind of important. I truly believe that unless you do projects aligned with industry, it’s not necessarily enough. You did say that talent matters, but the fact that we have a small time window means we don’t have as much time. So, as an example, I myself did my PhD in, you know, MEMS. And in industry, when I joined Intel, I started out with no knowledge of all the SPC stuff, no knowledge of how they do things on the floor and whatnot.

But I had to learn it, and I had enough time; I had no problems. This is not the case here. For example, sure enough, once Tata starts their fab, they’re going to quickly find out how hard it really is, how quickly and how often you fail, and how important it is to pick yourselves up and move forward. And that’s something that PhDs, for example, have built into them, because they fail, mostly just fail, and then eventually succeed at some point. So I think that’s another thing that probably needs to happen at a bigger scale.

I think it’s a big deal within India that more PhDs now also start looking into these kinds of jobs, or at least have some bent towards them. So that would be a thing.

Paul Triolo

And I think it’s important, having the manufacturing, having the fact that there are going to be fabs. Japan is going through a similar thing, right? For a long time they weren’t doing advanced logic, and that’s one of the reasons they attracted TSMC to come and build a fab. And now within the academic sector there’s a lot of interest in hardware engineering. It’s a hard discipline, but at the end of the day, if the country is building fabs and there’s a need for engineers, then that makes it more attractive. So that’s part of the whole ecosystem building.

David Freed

I was just going to say, I joke around that I only want LAM to benefit from this, but we’re seeing other companies in the industry follow us. Obviously, LAM is leading this effort, and obviously LAM is benefiting from this already, right? We’re already seeing the talent pipeline develop; we’re scaling the team in Bangalore; we’re already getting the benefits from this. And because of that, our competitors, but also our partner companies, have started doing the same. I can say ASML, for example. They’re not a competitor; they’re a very good partner, and we work with them very closely. We see them following suit. They’re jumping in and trying to do some of the same things we’re doing here in India because, again, their business objectives rely on closing that talent gap.

So I do think we’re seeing that. I’m very, very proud of LAM, very proud that we’re leading this and that we’re out in front. But I’m also very proud to see the rest of the industry jumping in, copying what we’re doing, because we all need it to happen.

Paul Triolo

Great. Do we want to take a couple questions from the audience? Okay, wow, we got a lot of them. Okay, let’s go right here.

Harish Kumar

Thank you very much, Chairman. I am Harish Kumar from CSTV, Access to Energy Systems. First of all, I would like to thank the Minister for a very good start for the semiconductor industry in India. My question is about skilling: skilling India, energizing India. How can LAM Research run a skilling activity in wafer development for solar technology? Solar cells and solar modules come from wafers, and there is no wafer-development unit of any kind in India. So is there any program on wafer development for solar cell manufacturing and marketing in India, rather than importing everything?

I don’t know if you…

Professor Saurabh Chandorkar

So I can answer to some extent, and let him take over from there. There are, in fact, efforts going on in India on polycrystalline silicon growth for wafers, and that’s something that is coming up. I won’t reveal the details, because I don’t know if they want to reveal it yet, but it’s a big company, and they’ll be bringing it in. So it’s happening. It’s going to happen.

Harish Kumar

On skill development: India has youth, 40% youth. The question is skilling in India, energizing India, solar technology, bringing solar technology to market.

David Freed

Sure. I mean, one thing I would say is: leverage the connection between industry, academia, and the government. It’s been incredibly fruitful. It’s also, frankly, been pleasant; it’s been such a joy to work together between the government, academia, and our industry. And I think solar should follow a similar model, where there’s a business opportunity, an educational opportunity, and an incentive to be successful as a country. Put those pieces together, and wonderful things can happen. And I cannot express how enjoyable this experience has been in India, because the faculty we’ve worked with at IISc and the other schools are such consummate professionals, so invested in this vision of the future, and the government is backing it.

So I would urge you to copy this model of putting the three pieces together, and wonderful things can happen, because the demand is here, the supply is here, and the commitment to the vision is here.

Participant

Okay, may I ask one question? This feels very palpably like a Y2K moment, where the demand is there and you have this great opportunity. If somebody listening to this has a young person in the family who is looking to pivot, in a flowchart, what is the first thing that young person needs to do to get into this market?

David Freed

For a young person: problem solving and critical thinking. Whether they want to be building Legos or doing coding exercises, critical thinking and problem solving come first, and then some specialization will occur naturally later. What I would urge against, and this goes back to some of my messages before, is focusing exclusively on a specific skill as the path to success. Just look at what happened with our previous focus on coding. Everybody said coding is the way to the future, coding is the way to success, and now AI is writing all the code. So I would stress: avoid the urge to focus on a single skill, a single solution, and focus instead on a broad-based understanding, on problem solving and critical thinking, on physics, chemistry, and materials science. The broad, hard physical sciences lead to these disciplines across the ecosystem.

Now, I say this as a father of two who has failed miserably to get his daughters into STEM. But I tried; I tried really, really hard. And I think that's where the talent is going to come from: from thinking broadly, thinking critically, and thinking about problem solving, rather than picking one skill to get very good at.

Paul Triolo

I got my daughter into chemical engineering.

Participant

Just a minute. Sir, I have one intervention directly for you, David. I was listening to you with rapt attention. Excuse me. On the subject of talent: I was a student of English literature at Calcutta University thirty years ago, and there is a very famous essay by T. S. Eliot on tradition and the individual talent. It is the talent pool which matters a lot. I have a specific question with respect to optimization, which you mentioned: the semiconductor is AI, AI is the semiconductor, and its optimization policy. Could you please highlight that as much as possible?

Paul Triolo

All right. Well, that will be our last question.

David Freed

So the interesting thing, I think, again, is that optimization and some of these technologies have to be really discipline-focused. When we're doing R&D, we're in a small-data environment; we don't have a lot of data, and optimization isn't very helpful. When we're in manufacturing, we have lots of data, and optimization is extremely helpful. And so we're developing machine learning and AI techniques, but you have to bring the right tool to the job. So I think we really have to focus on the discipline.

Paul Triolo

All right. Well, with that, we have to call it an end, because we have exceeded the time allotted to us and there are other people waiting to use this room. So thank you very much, David. Thank you, Paul, for hosting. Thank you very much, Professor Chandorkar. Appreciate it. Thank you very much, Paul. Thank you. All right. Come over here for a photo op. Thank you. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (21)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (medium)

“Rangesh Raghavan emphasized that a skilled workforce is essential for the growth of the semiconductor industry and that the meeting would explore scalable, holistic workforce strategies for India’s semiconductor ambitions.”

The knowledge base notes that all three speakers stressed that successful semiconductor workforce development requires close collaboration between industry, academia and government, aligning with Raghavan’s emphasis on a skilled, holistic workforce strategy [S6] and lists the speakers including Raghavan [S1].

Confirmed (medium)

“Secretary S. Krishnan announced that India had joined the Pax Silica consortium to build a “trusted supply chain”.”

The Pax Silica Declaration signing, which formalised India’s participation in the partnership to build trusted and resilient technology supply chains, is recorded in the knowledge base [S68].

Confirmed (high)

“Krishnan stated that India already supplies about 20 % of global semiconductor‑design talent.”

The knowledge base reports that Indian engineers conduct 20 % of worldwide chip design, confirming the 20 % figure [S11].

Confirmed (high)

“Krishnan highlighted that India “lacks people in advanced manufacturing”, especially in precision equipment.”

A speaker in the knowledge base explicitly says India has a large talent pool but lacks people in advanced manufacturing and precision equipment manufacturing [S67].

Additional Context (medium)

“India supplies about 20 % of global semiconductor‑design talent.”

Beyond the 20 % design share, the knowledge base adds that India produces roughly 1.5 million engineering graduates each year, providing additional depth to the talent-pool claim [S11].

External Sources (68)
S1
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Paul Triolo- Role/Title: Partner in technology practice lead at the DGA group; Panel discussion moderator
S2
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S3
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S4
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S5
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Rangesh Raghavan- Role/Title: Not explicitly mentioned, but appears to be moderating/hosting the event and representing…
S6
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Agreed with:Ashwini Vaishnaw, Rangesh Raghavan — Comprehensive semiconductor ecosystem development beyond just chip manu…
S7
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Professor Saurabh Chandorkar- Role/Title: Professor at Indian Institute of Science (IISc); Key partner in the launch an…
S8
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Speakers:David Freed, Harish Kumar, Professor Saurabh Chandorkar Speakers:David Freed, Paul Triolo, Professor Saurabh C…
S9
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -S. Krishnan- Role/Title: Secretary of METI (Ministry of Electronics and Information Technology)
S10
Empowering India &amp; the Global South Through AI Literacy — -Shri S. Krishnan: Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India
S12
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Harish Kumar- Role/Title: From CSTV, Access to Energy Systems
S13
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — 158 words | 140 words per minute | Duration: 67 seconds | Because of the skill development, India has a youth, 40 % yout…
S14
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -David Freed- Role/Title: Corporate Vice President and leader of LAM Research’s advanced analytical and simulation softw…
S15
https://app.faicon.ai/ai-impact-summit-2026/ai-powered-chips-and-skills-shaping-indias-next-gen-workforce — thank you very much christian sir uh deepa sir is in such a hurry that you’re in such a hurry uh we want to make sure yo…
S16
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Agreed with:David Freed — Broad talent development over narrow skill specialization Agreed with:David Freed — Massive s…
S17
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Ashwini Vaishnaw- Role/Title: Honorable Minister (appears to be instrumental in India’s semiconductor industry developm…
S18
Announcement of New Delhi Frontier AI Commitments — -Shri Ashwini Vaishnaw: Role/Title: Honorable Minister for Electronics and Information Technology, Area of expertise: El…
S19
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — -Ashwini Vaishnaw- Minister for Economic Electronics and Information Technology of India
S20
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — The semiconductor sector represents a parallel track of development, with Vaishnaw specifically mentioning the foundatio…
S21
Keynote Adresses at India AI Impact Summit 2026 — “This capability we have to develop.”[8]. “This scale we have to develop.”[9]. Vaishnav stresses that India must build …
S22
EU Digital Diplomacy: Geopolitical shift from focus on values to economic security  — The EU emphasises ‘resilient ICT supply chains’ and the use of trusted suppliers. In practice, this means diversifying a…
S23
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Secretary Krishnan argues that countries need to align with partners who share similar values to create secure supply ch…
S24
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Hiroshi Esaki:Well, simple thing is we love technology, and we love Earth, and we love globe. So also, we really love th…
S25
https://app.faicon.ai/ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — But I’ll tell you that we need to really work out an infrastructure. We need to work out on academic strength. We need t…
S26
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — Focusing on unique strengths and resources is important, as well as addressing the needs of local communities and making…
S27
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled worke…
S28
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Massive Workforce Development Challenge: The industry faces a critical shortage of approximately 1 million skilled work…
S29
Next Steps for Digital Worlds — In summary, the analysis highlighted concerns over the consolidation of semiconductors and its potential impact on infor…
S30
Approaches Towards Meaningful Connectivity in the Global South — This comment set a sobering tone for the entire discussion and established implementation gaps as a central theme. It in…
S31
Strategy outline — –  Absence of governmental policies, strategies and programs supporting the industrial sector. –  Lack of political an…
S32
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S33
Panel 3 – Innovations in Submarine Cable Technology and Maintenance &amp; Panel 4 – Legal and Regulatory Frameworks for Cable Protection — Sandra Maximiano stresses the importance of creating an ecosystem that balances connectivity, security, and innovation. …
S34
Secure Finance Risk-Based AI Policy for the Banking Sector — It calls for institutional mechanisms that allow individuals to seek clarification and redress where automated decisions…
S35
Semiconductors — In summary, the semiconductor industry 2025 will experience robust growth driven by AI and demand from data centers. Thi…
S36
The Battle for Chips — India is placing a strong emphasis on developing a comprehensive ecosystem for the semiconductor industry. The country b…
S37
Future-Ready Education: Enhancing Accessibility &amp; Building | IGF 2023 — 1. Nepal requires more practical and skills-based education to enhance employability. Despite having years of formal edu…
S38
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Collaboration between academia and industry is essential for effective decarbonization strategies. An example is provide…
S39
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — To ensure successful integration, bridging the gap between academia and industry is essential. Due to the rapid advancem…
S40
SEMI calls for stronger EU semiconductor policy — Industry groupSEMIEurope has urged the incomingEuropean Commissionto adopt a more unified industrial strategy and expand…
S41
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -Building India’s Role in Global Supply Chains: Discussion of making India an indispensable part of the global semicondu…
S42
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — While acknowledging India’s strengths in design and general manufacturing talent, Krishnan identifies a specific gap in …
S43
Socially, Economically, Environmentally Responsible Campuses | IGF 2023 Open Forum #159 — Collaboration between the United States and India was emphasized, particularly in the field of building energy research …
S44
Fireside Chat The Future of AI & STEM Education in India — Additionally, a major project in partnership with TATA consortiums will implement Industry 4.0 learning across approxima…
S45
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S46
Open Mic &amp; Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S47
WSIS Prizes 2025 Winner’s Ceremony — The tone throughout the ceremony was consistently celebratory, formal, and appreciative. It maintained a positive and co…
S48
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S49
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S50
AI Algorithms and the Future of Global Diplomacy — The tone was professional and collaborative throughout, with participants demonstrating mutual respect and shared intere…
S51
Session — Marilia Maciel: Thank you, Jovan. I’ll do that, but I’ll do that by going back to your question about what predominates,…
S52
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S53
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — The discussion maintained a consistently collaborative and solution-oriented tone throughout. Speakers were optimistic a…
S54
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S55
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S56
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S57
AI That Empowers Safety Growth and Social Inclusion in Action — Collaborative approach between governments, industry, academia and civil society rather than siloed regulatory or self-r…
S58
Closure of the session — Decision-making procedures.
S59
Closing remarks — Minimal to no disagreement present. This transcript represents a closing ceremony where speakers (Doreen Bogdan Martin, …
S60
WS #148 Making the Internet greener and more sustainable — The tone of the discussion was generally constructive and solution-oriented. Speakers approached the topic seriously but…
S61
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — In many cultures, it is customary to exchange pleasantries and bid farewell before leaving a conversation or gathering. …
S62
The Global Power Shift India’s Rise in AI & Semiconductors — Absolutely, totally agree. You know, I have to share this thing. I was actually conducting a panel discussion within AMD…
S63
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — So we have, for all these issues and more, we have eminent speakers here, both from the service providers, from the R &D…
S64
Agenda item 5 : Day 4 Afternoon session — Chair:Good afternoon, distinguished delegates. The eighth meeting of the seventh substantive session of the Open-Ended W…
S65
WAIGF Opening Ceremony &amp; Keynote — Hajia Sani: I’m sure we can do much better than that. Another round of applause for the Minister. Thank you so much. You…
S66
Agenda item 6: other matters — Brazil: Thank you very much, Mr Chair. Brazil aligns itself with the statement made by Argentina on behalf of a number…
S67
https://dig.watch/event/india-ai-impact-summit-2026/ai-powered-chips-and-skills-shaping-indias-next-gen-workforce — We also are recognized as having one of the largest talent pools for manufacturing, for AI in the world. Both of these a…
S68
Keynote Adresses at India AI Impact Summit 2026 — The Pax Silica Declaration signing: A historic agreement between India and the United States aimed at strengthening secu…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
David Freed
4 arguments | 164 words per minute | 1536 words | 560 seconds
Argument 1
Emphasis on the critical need for a million‑person talent pipeline covering design, fab operations, equipment, metrology, reliability, etc.
EXPLANATION
David Freed stresses that the semiconductor sector in India requires a workforce of roughly one million people across a wide range of roles, from design engineers to metrology and reliability specialists. He argues that without such a scale of talent the industry cannot meet its growth targets.
EVIDENCE
He identifies the “million-person gap” and lists the specific categories of workers needed, including field service engineers, process engineers, equipment engineers, metrology engineers, device engineers, and reliability experts [172-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Freed’s projection of a one-million-person talent gap and the list of required roles are corroborated in the AI-Powered Chips briefing, which highlights the same gap and discipline breadth [S1] and [S6].
MAJOR DISCUSSION POINT
Talent pipeline size
AGREED WITH
S. Krishnan, Ashwini Vaishnaw, Professor Saurabh Chandorkar
Argument 2
Call for broad, problem‑solving based education rather than narrow skill training; focus on understanding the whole ecosystem.
EXPLANATION
Freed argues that education should prioritize a holistic understanding of semiconductor products and processes rather than teaching isolated, single‑skill tasks. He believes this broad, problem‑solving approach will better prepare graduates for the diverse roles in the ecosystem.
EVIDENCE
He emphasizes the importance of understanding what is being produced and the overall process integration, stating that “understanding … is so much more important than a singular skill” and that curricula should develop broad talent rather than narrow skill sets [181-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on developing broad talent and problem-solving abilities over narrow, single-skill training is echoed in the external summary of Freed’s remarks, which stresses “talent rather than skill” and critical thinking [S1] and [S6].
MAJOR DISCUSSION POINT
Broad-based education
AGREED WITH
Professor Saurabh Chandorkar
DISAGREED WITH
Professor Saurabh Chandorkar
Argument 3
Recommendation for faculty fellowships and industry‑embedded research to bring university staff into practical semiconductor work.
EXPLANATION
Freed proposes creating faculty fellowships that place academic staff inside semiconductor companies for six to nine months, enabling them to acquire industry‑relevant knowledge that can be transferred back to universities. He sees this as a way to bridge the talent gap.
EVIDENCE
He describes the idea of funding faculty fellowships that would give professors a temporary industry role, allowing them to bring back practical expertise to academia [208-210].
MAJOR DISCUSSION POINT
Faculty fellowships
AGREED WITH
Professor Saurabh Chandorkar, S. Krishnan, Ashwini Vaishnaw
DISAGREED WITH
Professor Saurabh Chandorkar
Argument 4
Emphasis that optimization techniques differ between R&D (small data) and manufacturing (big data) and must be applied appropriately.
EXPLANATION
Freed explains that in research environments data is scarce, making optimization less effective, whereas in manufacturing large datasets enable powerful optimization and AI methods. He stresses the need to match the right tools to the data context.
EVIDENCE
He notes that “optimization isn’t very helpful when we’re in a small-data R&D mode” but becomes “extremely helpful” in big-data manufacturing, and that they are developing machine-learning techniques accordingly [306-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Freed’s distinction between small-data R&D environments and big-data manufacturing contexts for optimization and AI tools is documented in the external briefing, confirming his nuanced view [S1] and [S6].
MAJOR DISCUSSION POINT
Optimization in different contexts
AGREED WITH
S. Krishnan, Ashwini Vaishnaw
A
Ashwini Vaishnaw
3 arguments | 129 words per minute | 464 words | 215 seconds
Argument 1
Highlight of India’s target of 60,000 clean‑room operators and 80,000 design engineers, and rapid expansion of university participation to 315 institutions.
EXPLANATION
Vaishnaw outlines the quantitative goals set by the India Semiconductor Mission, aiming for 60,000 clean‑room staff and 80,000 design engineers, and notes that university involvement has grown from an initial 50 to 315 institutions. He presents these figures as evidence of rapid capacity building.
EVIDENCE
He cites the 2022 target of 60,000 clean-room operators and 80,000 design engineers, the original plan to start with 50 universities, and the current participation of 315 universities with students using advanced design tools and fabricating chips at SCL Mohali [103-106].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The government’s quantitative workforce goals and the growth from 50 to 315 participating universities are reported in the external source covering Vaishnaw’s statements on talent targets [S1] and [S6].
MAJOR DISCUSSION POINT
Workforce targets and university expansion
AGREED WITH
Professor Saurabh Chandorkar, David Freed, S. Krishnan
Argument 2
Description of the government’s commitment to 10 major semiconductor plants, with four starting production in 2026, and a new fab in Uttar Pradesh.
EXPLANATION
Vaishnaw reports that the government has committed to establishing ten large semiconductor manufacturing facilities, four of which are slated to begin operations in 2026, and mentions an upcoming plant in Uttar Pradesh announced by the Prime Minister. This demonstrates policy backing for the sector.
EVIDENCE
He states that “we are going to have 10 of the … major semiconductor plants … four of them at least will commence production during 2026” and later notes “a new semiconductor plant will be founded … in Uttar Pradesh” [35-37][123-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vaishnaw’s announcement of ten major plants, four slated for 2026, and the upcoming Uttar Pradesh fab is confirmed in multiple external briefs, including the press briefing on the AI Impact Summit [S1], [S6] and [S20].
MAJOR DISCUSSION POINT
Plant rollout commitment
AGREED WITH
Paul Triolo, David Freed, Professor Saurabh Chandorkar
Argument 3
Assertion that semiconductors constitute a critical layer in the AI architecture and must be developed domestically to avoid over‑reliance on any single geography.
EXPLANATION
Vaishnaw emphasizes that semiconductors form a foundational layer within a five‑layer AI architecture, making domestic capability essential for resilience. He links this to the broader need for a diversified global supply chain.
EVIDENCE
He explains that “in this architecture of five layers, semiconductor is going to be a very important layer” and stresses the importance of domestic development to avoid dependence on any one geography [109-111].
MAJOR DISCUSSION POINT
Semiconductors as AI layer
AGREED WITH
S. Krishnan, David Freed
S
S. Krishnan
4 arguments | 157 words per minute | 1069 words | 407 seconds
Argument 1
Assertion that India already has 20 % of global semiconductor design talent but lacks skilled workers for advanced manufacturing and equipment precision.
EXPLANATION
Krishnan points out that while India contributes a sizable share of global design talent, the country is deficient in advanced manufacturing expertise, especially in precision equipment. He highlights this gap as a barrier to full ecosystem development.
EVIDENCE
He notes that “we have 20 % of the semiconductor design team in the country, in the world” yet “we lack … people in advanced manufacturing” and “precision manufacturing of the equipment needed for semiconductors” [42-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Krishnan’s claim about India’s 20 % share of global design talent and the gap in advanced manufacturing and equipment precision is reflected in the external summary of his remarks [S1] and [S6].
MAJOR DISCUSSION POINT
Design talent vs manufacturing gap
AGREED WITH
David Freed, Ashwini Vaishnaw, Professor Saurabh Chandorkar
Argument 2
Announcement of ISM 2.0 covering the entire ecosystem, including semiconductor equipment manufacturing, to build capacity for a $100 bn domestic market.
EXPLANATION
Krishnan announces the second phase of the India Semiconductor Mission, which expands its scope to cover the full value chain, including equipment production, aiming to support a projected $100 billion domestic market by decade’s end.
EVIDENCE
He states that “India Semiconductor Mission 2.0 has also been announced, which will cover the entire ecosystem, including the manufacture of semiconductor equipment in the country” and references the $100 bn market projection [37-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The launch of India Semiconductor Mission 2.0, its ecosystem-wide scope and the $100 billion market projection are detailed in the external source on ISM 2.0 [S6].
MAJOR DISCUSSION POINT
ISM 2.0 scope
Argument 3
Emphasis on the need for a resilient and reliable supply chain where over‑reliance on any one geography is avoided.
EXPLANATION
Krishnan argues that global supply‑chain resilience requires diversification, noting that the COVID‑19 pandemic exposed risks of dependence on single regions. He positions India as a long‑term, trustworthy partner in this diversified network.
EVIDENCE
He says “What the world needs is a resilient and reliable supply chain… it is not just for geopolitical reasons… over-reliance on any one geography is always going to be a problem” [31-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Krishnan’s call for diversified, resilient supply chains and avoidance of single-geography dependence aligns with external discussions on supply-chain resilience in the AI-Powered Chips briefing and EU/US policy analyses [S1], [S6], [S22], [S23].
MAJOR DISCUSSION POINT
Supply‑chain resilience
AGREED WITH
Ashwini Vaishnaw, David Freed
Argument 4
Mention of India’s participation in the Pax Silica consortium to build a trusted semiconductor supply chain.
EXPLANATION
Krishnan notes that India has joined the Pax Silica initiative, which aims to create a trustworthy, transparent semiconductor supply chain, reinforcing the country’s commitment to global standards.
EVIDENCE
He reports that “we also signed the Pax Silica, we were added to the Pax Silica” as a step forward for a trusted supply chain [30].
MAJOR DISCUSSION POINT
Pax Silica membership
P
Professor Saurabh Chandorkar
4 arguments | 143 words per minute | 1178 words | 491 seconds
Argument 1
Presentation of the “semi‑verse” platform at IISc, enabling students to experience fab tools and process integration concepts.
EXPLANATION
Chandorkar describes the semi‑verse as a simulation environment that lets students interact with fab equipment and understand process integration, thereby providing practical exposure without a physical fab.
EVIDENCE
He explains that “that’s where the semi-verse comes in… we have a really good FAB… we use this software… I teach SPC, which is basically process control” [143-150].
MAJOR DISCUSSION POINT
Semi‑verse training tool
Argument 2
Statement that academic fabs are world‑class but cannot alone train a million workers; need for additional training fabs and INUP programs across the country.
EXPLANATION
Chandorkar acknowledges that IISc’s academic fab ranks among the top globally, yet stresses that a single institution cannot meet the massive training demand, calling for more training facilities and nationwide programs.
EVIDENCE
He notes that “we are probably in the top three or four in the world… but we alone cannot take the role of training one million people” and calls for expanding training fabs and INUP programs [144-146][154-158].
MAJOR DISCUSSION POINT
Scaling training capacity
AGREED WITH
David Freed, S. Krishnan, Ashwini Vaishnaw
Argument 3
Call for government support to scale hands‑on training facilities and to align curricula with fab‑relevant skills.
EXPLANATION
Chandorkar urges the government to fund and expand hands‑on training infrastructure and to adapt university curricula so that graduates acquire the practical skills needed for fab operations.
EVIDENCE
He emphasizes the need for a “second layer of hands-on training” and mentions existing programs like INUP, advocating for broader rollout across India [154-158].
MAJOR DISCUSSION POINT
Government‑backed hands‑on training
AGREED WITH
Paul Triolo, David Freed, Ashwini Vaishnaw
Argument 4
Suggestion that industry should provide hands‑on courses (e.g., pressure‑gauge, P&ID training) and expand collaborations with companies like LAM.
EXPLANATION
Chandorkar proposes that semiconductor firms develop practical short courses covering equipment maintenance and process control, and that such collaborations be broadened beyond current pilots.
EVIDENCE
He cites a newly started course on pressure-gauge and P&ID training, and notes ongoing joint efforts with LAM that could be replicated elsewhere [197-199].
MAJOR DISCUSSION POINT
Industry‑led practical courses
DISAGREED WITH
David Freed
P
Paul Triolo
2 arguments | 143 words per minute | 767 words | 320 seconds
Argument 1
Moderator’s observation that three‑way collaboration (government, academia, industry) is essential for scaling the ecosystem.
EXPLANATION
Triolo highlights that successful semiconductor ecosystem development depends on coordinated efforts among the government, academic institutions, and industry partners, framing it as a three‑way relationship.
EVIDENCE
He remarks that “the integration of government support for both the academic piece of this and the industry piece is really important, a really important three-way relationship” [162-164].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of coordinated government, academic and industry effort is reiterated in the external summary of the panel, which highlights the three-way relationship as a key success factor [S1] and [S6].
MAJOR DISCUSSION POINT
Three‑way collaboration
AGREED WITH
David Freed, Professor Saurabh Chandorkar, Ashwini Vaishnaw
Argument 2
Reminder that the presence of fabs makes hardware engineering more attractive to students, mirroring Japan’s experience with TSMC.
EXPLANATION
Triolo points out that establishing domestic fabs raises the profile of hardware engineering, making it a more appealing career path for students, just as TSMC’s new fabs in Japan have renewed student interest in the field there.
EVIDENCE
He notes “Japan is going through a similar thing… now that there are fabs, hardware engineering becomes more attractive” [238-239].
MAJOR DISCUSSION POINT
Fabs boosting hardware engineering appeal
Harish Kumar
1 argument · 140 words per minute · 158 words · 67 seconds
Argument 1
Query about developing domestic wafer capability for solar technology and the need to avoid imports, indicating a broader materials‑manufacturing agenda.
EXPLANATION
Kumar asks whether India has any program for developing wafer production for solar cells and modules, emphasizing the desire to build a self‑sufficient domestic supply chain rather than relying on imports.
EVIDENCE
He asks “How to make the LAM Research, make a skilling activity like in wafer development… there is no unit of any kind in India on wafer development… any program on wafer development for the solar manufacturing… not import anything?” [262-267].
MAJOR DISCUSSION POINT
Domestic solar wafer development
Participant
1 argument · 170 words per minute · 277 words · 97 seconds
Argument 1
Advice to young aspirants to cultivate critical thinking, problem‑solving, and a broad science foundation rather than focusing on a single skill such as coding.
EXPLANATION
The participant (through a question) seeks guidance for young people entering the semiconductor market, prompting a response that stresses broad-based problem‑solving abilities over narrow specialization.
EVIDENCE
The participant asks “what is the first thing that the young person needs to do to get into this market?” and later the discussion leads to advice about critical thinking, problem-solving, and avoiding exclusive focus on a single skill [285-286][295-302].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for broad, problem-solving talent over narrow skill specialization matches the external commentary on Freed’s and other speakers’ emphasis on holistic talent development [S1] and [S6].
MAJOR DISCUSSION POINT
Guidance for youth entrants
Rangesh Raghavan
2 arguments · 123 words per minute · 1070 words · 521 seconds
Argument 1
Opening remarks praising the exhibition, thanking MeitY, LAM, and all participants, and highlighting the importance of the gathering for India’s semiconductor future.
EXPLANATION
Raghavan welcomes attendees, commends the exhibition’s impact, thanks the Ministry of Electronics and Information Technology (MeitY) and LAM Research, and frames the event as pivotal for advancing India’s semiconductor ambitions.
EVIDENCE
He thanks the audience, welcomes guests, praises the exhibition as “mind-blowing,” notes its extension, and acknowledges the presence of MeitY and LAM representatives [1-15][16-22].
MAJOR DISCUSSION POINT
Event opening and appreciation
Argument 2
Acknowledgement of the minister’s role in advancing the industry and gratitude for LAM’s continued support.
EXPLANATION
Raghavan highlights Minister Vaishnaw’s instrumental contributions to the semiconductor sector, announces his imminent arrival, and expresses gratitude for LAM’s ongoing partnership.
EVIDENCE
He states “Minister Vaishnaw ji has been instrumental in getting this industry where it is… we look forward to his presence” and then invites David Freed to speak, thanking LAM for its support [65-68][69-71].
MAJOR DISCUSSION POINT
Ministerial recognition and LAM appreciation
Agreements
Agreement Points
A large, multi‑disciplinary talent pipeline of roughly one million workers is required to support India’s semiconductor ambitions.
Speakers: David Freed, S. Krishnan, Ashwini Vaishnaw, Professor Saurabh Chandorkar
Emphasis on the critical need for a million‑person talent pipeline covering design, fab operations, equipment, metrology, reliability, etc. Assertion that India already has 20 % of global semiconductor design talent but lacks skilled workers for advanced manufacturing and equipment precision. Highlight of India’s target of 60,000 clean‑room operators and 80,000 design engineers, and rapid expansion of university participation to 315 institutions. Statement that academic fabs are world‑class but cannot alone train a million workers; need for additional training fabs and INUP programs across the country.
All four speakers stress that India must develop a workforce on the order of a million people across many semiconductor roles to meet the sector’s growth targets [172-180][42-48][103-106][145-146].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses estimate a global shortage of about one million skilled semiconductor workers, and India’s strategic plans aim to build a comparable pipeline, reflecting the workforce challenge highlighted in recent reports [S27][S28][S36].
Education and training should emphasize broad, problem‑solving understanding of the semiconductor ecosystem rather than narrow, single‑skill instruction.
Speakers: David Freed, Professor Saurabh Chandorkar
Call for broad, problem‑solving based education rather than narrow skill training; focus on understanding the whole ecosystem. Call for government support to scale hands‑on training facilities and to align curricula with fab‑relevant skills.
Both speakers argue that curricula must develop a holistic grasp of semiconductor processes and systems, avoiding overly narrow skill-specific training [181-188][148-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Workforce development studies stress the importance of cultivating broad, problem-solving talent and critical thinking over narrowly focused skill sets to meet semiconductor industry needs [S27][S28].
Effective progress depends on coordinated three‑way collaboration among government, academia and industry.
Speakers: Paul Triolo, David Freed, Professor Saurabh Chandorkar, Ashwini Vaishnaw
Moderator’s observation that three‑way collaboration (government, academia, industry) is essential for scaling the ecosystem. Recommendation for faculty fellowships and industry‑embedded research to bring university staff into practical semiconductor work. Call for government support to scale hands‑on training facilities and to align curricula with fab‑relevant skills. Description of the government’s commitment to 10 major semiconductor plants, with four starting production in 2026, and a new fab in Uttar Pradesh.
The panel repeatedly highlights that government policy, academic programmes and industry initiatives must work together to build capacity and scale the ecosystem [162-164][207-214][154-158][103-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs consistently call for sustained tri-sector collaboration as essential for turning semiconductor strategies into tangible outcomes, emphasizing joint action among governments, academia, and industry [S32][S38][S39].
Semiconductors are a foundational layer for AI and must be developed domestically to ensure a resilient, diversified supply chain.
Speakers: S. Krishnan, Ashwini Vaishnaw, David Freed
Emphasis on the need for a resilient and reliable supply chain where over‑reliance on any one geography is avoided. Assertion that semiconductors constitute a critical layer in the AI architecture and must be developed domestically to avoid over‑reliance on any single geography. Emphasis that optimization techniques differ between R&D (small data) and manufacturing (big data) and must be applied appropriately.
All three speakers link semiconductor capability to AI advancement and to supply-chain resilience, arguing that domestic capacity is essential for future AI-driven growth [29-30][31-33][109-111][311-313].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI-driven demand identify semiconductors as a strategic asset, prompting nations to pursue domestic production and diversified supply chains for resilience and security [S35].
Hands‑on training facilities, such as training fabs and faculty fellowships, are essential to scale up the semiconductor workforce.
Speakers: Professor Saurabh Chandorkar, David Freed, S. Krishnan, Ashwini Vaishnaw
Statement that academic fabs are world‑class but cannot alone train a million workers; need for additional training fabs and INUP programs across the country. Recommendation for faculty fellowships and industry‑embedded research to bring university staff into practical semiconductor work. As part of the India Semiconductor Mission, we have trained workers in fabs and in OSATs … in Singapore, Taiwan, Europe, etc. Highlight of India’s target of 60,000 clean‑room operators and 80,000 design engineers, and rapid expansion of university participation to 315 institutions.
Consensus emerges that practical, hands-on training, whether via dedicated training fabs, industry-university fellowships, or extensive multi-country skill programmes, is crucial for meeting workforce goals [154-158][208-210][53-60][103-106].
POLICY CONTEXT (KNOWLEDGE BASE)
Case studies demonstrate that hands-on labs, training fabs, and faculty fellowship programs are effective mechanisms for bridging academia-industry gaps and expanding practical skills in semiconductor manufacturing [S38][S39][S32].
Similar Viewpoints
Both emphasize that education should build a holistic understanding of semiconductor processes and avoid overly narrow, single‑skill training programmes [181-188][148-152].
Speakers: David Freed, Professor Saurabh Chandorkar
Call for broad, problem‑solving based education rather than narrow skill training; focus on understanding the whole ecosystem. Call for government support to scale hands‑on training facilities and to align curricula with fab‑relevant skills.
All three stress that coordinated action among government, academia and industry is the cornerstone for building capacity and scaling the semiconductor ecosystem [162-164][207-214][154-158].
Speakers: Paul Triolo, David Freed, Professor Saurabh Chandorkar
Moderator’s observation that three‑way collaboration (government, academia, industry) is essential for scaling the ecosystem. Recommendation for faculty fellowships and industry‑embedded research to bring university staff into practical semiconductor work. Call for government support to scale hands‑on training facilities and to align curricula with fab‑relevant skills.
Both link semiconductor self‑sufficiency to supply‑chain resilience and to the strategic AI layer, arguing that domestic capability reduces geopolitical risk [31-33][109-111].
Speakers: S. Krishnan, Ashwini Vaishnaw
Emphasis on the need for a resilient and reliable supply chain where over‑reliance on any one geography is avoided. Assertion that semiconductors constitute a critical layer in the AI architecture and must be developed domestically to avoid over‑reliance on any single geography.
Both acknowledge the relevance of solar‑related wafer technology within the broader semiconductor and materials agenda [92-93][262-267].
Speakers: Ashwini Vaishnaw, Harish Kumar
Solar technology. You’re in solar, very good. Query about developing domestic wafer capability for solar technology and the need to avoid imports, indicating a broader materials‑manufacturing agenda.
Unexpected Consensus
Recognition of solar‑technology relevance within a semiconductor‑focused forum.
Speakers: Ashwini Vaishnaw, Harish Kumar
Solar technology. You’re in solar, very good. Query about developing domestic wafer capability for solar technology and the need to avoid imports, indicating a broader materials‑manufacturing agenda.
While the panel primarily discussed semiconductor design, manufacturing and AI, both a senior government official and an audience member highlighted solar wafer development, showing an unexpected alignment on the importance of solar-related semiconductor processes [92-93][262-267].
Overall Assessment

There is strong consensus among speakers that India needs a massive, multi‑disciplinary semiconductor workforce, that education should be broad and problem‑solving oriented, and that coordinated three‑way collaboration between government, academia and industry is essential. All agree that semiconductors are a strategic AI layer and that hands‑on training facilities are critical to scale capacity.

High consensus – the repeated convergence on workforce size, education philosophy, collaborative governance and the AI‑semiconductor link suggests a unified policy direction and a solid foundation for coordinated action.

Differences
Different Viewpoints
Education approach – broad, problem‑solving based versus narrowly focused hands‑on skill training
Speakers: David Freed, Professor Saurabh Chandorkar
Call for broad, problem‑solving based education rather than narrow skill training; focus on understanding the whole ecosystem. Suggestion that industry should provide hands‑on courses (e.g., pressure‑gauge, P&ID training) and expand collaborations with companies like LAM.
Freed argues that curricula should develop a wide-ranging, problem-solving mindset and a holistic understanding of semiconductor products rather than teaching isolated, single-skill tasks [181-188]. Chandorkar stresses the need for concrete, hands-on courses and curriculum alignment with fab-relevant skills, emphasizing a more skill-specific training model [197-199][154-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Research highlights a tension between broad, problem-solving curricula advocated for systemic understanding and short, skills-based courses that quickly improve employability, reflecting divergent views on optimal education models [S27][S37].
Preferred mechanism for linking academia and industry – faculty fellowships versus short practical courses and broader collaborations
Speakers: David Freed, Professor Saurabh Chandorkar
Recommendation for faculty fellowships and industry‑embedded research to bring university staff into practical semiconductor work. Suggestion that industry should provide hands‑on courses (e.g., pressure‑gauge, P&ID training) and expand collaborations with companies like LAM.
Freed proposes formal, funded faculty fellowships that place academics inside semiconductor firms for 6-9 months to transfer industry knowledge back to universities [208-210]. Chandorkar advocates for industry-run short, practical training modules and expanded joint projects, rather than a fellowship model, focusing on immediate hands-on skill development [197-199][200-202].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from collaborative initiatives shows faculty fellowships provide deep, sustained engagement, while rapid upskilling programs favor short practical courses; both are recognized as viable linkage mechanisms between academia and industry [S38][S39].
Unexpected Differences
Domestic solar‑wafer capability versus perceived lack of programs
Speakers: Harish Kumar, Professor Saurabh Chandorkar
Query about developing domestic wafer capability for solar technology and the need to avoid imports. Statement that there are efforts in India for polycrystalline silicon growth for wafers, but details are not disclosed.
Kumar asks for a clear, publicly disclosed program to produce solar wafers domestically, implying none exists, while Chandorkar indicates that such efforts are underway but not publicly detailed, revealing a mismatch in expectations and an unexpected shift from semiconductor focus to solar-wafer production [262-267][269-272].
Overall Assessment

The panel largely concurs on the urgency of building a large, skilled semiconductor workforce and the importance of multi‑stakeholder collaboration. Disagreements centre on the preferred educational strategy (broad understanding vs. specific hands‑on skills) and the optimal mechanism for industry‑academia linkage (faculty fellowships vs. short practical courses). An unexpected tension arose around solar‑wafer capability, reflecting a broader materials‑manufacturing agenda beyond the core semiconductor discussion.

Moderate – while the participants share common goals, the divergent views on training methodology and partnership models could affect policy design and implementation timelines, requiring careful alignment to avoid fragmented efforts.

Partial Agreements
All speakers agree that India must develop a massive semiconductor workforce, but they differ on the framing (Freed quantifies a million‑person gap, Krishnan focuses on ecosystem‑wide ISM 2.0, Vaishnaw cites specific numeric targets, and Chandorkar stresses the need for expanded training infrastructure) [172-180][37-40][103-106][144-146][154-158].
Speakers: David Freed, S. Krishnan, Ashwini Vaishnaw, Professor Saurabh Chandorkar
Emphasis on the critical need for a million‑person talent pipeline covering design, fab operations, equipment, metrology, reliability, etc. Announcement of ISM 2.0 covering the entire ecosystem, including semiconductor equipment manufacturing, to build capacity for a $100 billion domestic market. Highlight of India’s target of 60,000 clean‑room operators and 80,000 design engineers, and rapid expansion of university participation to 315 institutions. Statement that academic fabs are world‑class but cannot alone train a million workers; need for additional training fabs and INUP programs across the country.
All three emphasize the necessity of coordinated government‑academia‑industry action, but Triolo highlights the relational aspect, Freed stresses the alignment of objectives, and Chandorkar focuses on concrete government‑backed training initiatives [162-164][202-207][154-158].
Speakers: Paul Triolo, David Freed, Professor Saurabh Chandorkar
Moderator’s observation that three‑way collaboration (government, academia, industry) is essential for scaling the ecosystem. Statement that industry, academia and government have mutually aligned objectives and must work together to close the talent gap. Call for government support to scale hands‑on training facilities and to align curricula with fab‑relevant skills.
Takeaways
Key takeaways
India’s semiconductor ecosystem requires a massive talent pipeline (≈1 million workers) spanning design, fab operations, equipment, metrology, reliability and related disciplines.
Broad, problem‑solving‑oriented education is preferred over narrowly focused skill training; understanding the full semiconductor value chain is essential.
The government’s India Semiconductor Mission (ISM) 2.0 will expand support to the entire ecosystem, including semiconductor equipment manufacturing, and aims to enable a $100 bn domestic market.
Ten major semiconductor plants are committed, with at least four beginning production in 2026 and a new fab announced for Uttar Pradesh.
LAM Research has a 25‑year presence in India, a state‑of‑the‑art systems engineering lab in Bengaluru, and is leading workforce‑development initiatives such as the “semi‑verse” platform.
Academic fabs (e.g., IISc) are world‑class but cannot alone train a million workers; additional hands‑on training fabs and INUP programs are needed across the country.
Collaboration among government, industry, and academia is critical; examples include faculty fellowships, industry‑embedded courses, and joint projects.
AI and semiconductors are mutually reinforcing; domestic chip capability is vital for supply‑chain resilience and to avoid over‑reliance on any single geography.
India has about 20 % of global semiconductor design talent but a shortage in advanced manufacturing and precision equipment skills.
Resolutions and action items
LAM Research will continue expanding its semi‑verse training platform and will explore faculty fellowship programmes that place university staff in industry for 6‑9 months.
IISc and partner universities will develop additional hands‑on courses (e.g., pressure‑gauge, P&ID, SPC) and scale up training fabs to support the talent pipeline.
Government agencies (under ISM 2.0) will be asked to provide funding and policy support for scaling hands‑on training facilities and aligning curricula with fab‑relevant skills.
Commitment to commence production at four of the ten announced semiconductor plants in 2026, with the remaining plants to follow within the next year.
Industry partners (including ASML) will replicate LAM’s workforce‑development model across their own Indian operations.
A coordinated three‑way collaboration framework (government‑industry‑academia) will be formalised to monitor progress on talent development and supply‑chain integration.
Unresolved issues
Exact funding mechanisms and budget allocations for faculty fellowships and the expansion of training fabs were not defined.
Specific timelines and responsible agencies for scaling the semi‑verse platform to reach the 1 million‑person target remain unclear.
Details of the domestic wafer‑production programme for solar technology (raised by Harish Kumar) were not disclosed.
Implementation plan for aligning PhD research projects with industry needs beyond LAM was not finalized.
Mechanisms for monitoring and ensuring the quality and certification of newly trained clean‑room operators and equipment engineers were not specified.
Suggested compromises
Shift focus from narrow, single‑skill training to broader, ecosystem‑wide understanding while still offering targeted hands‑on modules for critical equipment skills.
Introduce faculty fellowships as a middle ground, allowing universities to retain staff while providing industry exposure, thereby bridging academic‑industry gaps.
Encourage industry to provide specific short‑term courses (e.g., pressure‑gauge, P&ID) within existing academic programmes rather than building entirely new curricula.
Balance the immediate need for large‑scale talent with the longer‑term development of advanced manufacturing capabilities by prioritising both design talent and precision‑equipment skills.
Thought Provoking Comments
Semiconductors are central to the AI story and AI is increasingly central to the semiconductor story – the two missions (India AI Mission and India Semiconductor Mission) are converging.
This framing links two major national initiatives, highlighting that progress in one cannot be isolated from the other and that policy, investment, and talent development must be coordinated across both domains.
It set the thematic foundation for the rest of the discussion, prompting speakers to address talent, supply‑chain resilience, and manufacturing as shared challenges for both AI and semiconductor growth.
Speaker: S. Krishnan
We need a resilient and reliable global supply chain, not just for geopolitical reasons but also because over‑reliance on any one geography proved problematic during COVID‑19.
It broadens the conversation from a purely domestic focus to the strategic importance of India’s role in the worldwide semiconductor ecosystem.
Shifted the tone from celebrating domestic milestones to emphasizing the necessity of integrating India into a diversified global supply chain, leading to later remarks about export capability and partnership with companies like LAM.
Speaker: S. Krishnan
The real challenge in the next five years is the shortage of people skilled in advanced manufacturing and precision equipment for semiconductors, not just design talent.
Identifies a specific, under‑addressed gap in the talent ecosystem, moving the discussion beyond the usual focus on design engineers.
Prompted subsequent speakers (David Freed, Professor Chandorkar) to discuss hands‑on training, faculty fellowships, and the need for a broader talent pipeline rather than narrow skill training.
Speaker: S. Krishnan
The million‑person talent gap is not a single type of skill; we must develop broad talent and understanding of the industry, not just isolated technical abilities.
Challenges the common industry narrative of “skill‑specific training” and proposes a paradigm shift toward holistic education and industry awareness.
Reoriented the conversation toward curriculum redesign, interdisciplinary learning, and the importance of conceptual knowledge, influencing Professor Chandorkar’s suggestions on hands‑on courses and faculty fellowships.
Speaker: David Freed
Faculty fellowships – placing university faculty inside industry for 6‑9 months – could bring industry‑relevant knowledge back to academia and help close the talent gap.
Introduces a concrete, innovative mechanism for deeper academia‑industry integration, moving beyond traditional internships or student projects.
Generated agreement from Professor Chandorkar, who saw it as a way to align PhD projects with industry needs, and set a practical action point for ISM 2.0 discussions.
Speaker: David Freed
We have expanded from 50 to 315 universities using world‑class design tools; students across the country are now designing and fabricating chips, creating a new national capability.
Provides quantitative evidence of rapid ecosystem scaling, reinforcing the urgency of supporting this growth with appropriate talent development.
Validated earlier claims about talent pool size, reinforced the need for coordinated policy support, and served as a transition to discussing future university‑industry collaborations.
Speaker: Ashwini Vaishnaw
Hands‑on training (e.g., pressure‑gauge operation, P&ID systems) is essential; academic fabs alone cannot train a million people, so industry must partner to provide practical exposure.
Highlights the limitation of purely academic training and the necessity of industry‑driven practical modules, adding depth to the talent‑pipeline conversation.
Led to a consensus on expanding training facilities, reinforced David Freed’s fellowship idea, and steered the panel toward actionable steps for ISM 2.0.
Speaker: Professor Saurabh Chandorkar
For a young person wanting to enter this market, focus on broad problem‑solving, critical thinking, and fundamentals (physics, chemistry, material science) rather than a single narrow skill.
Distills the earlier discussion into actionable advice for the next generation, emphasizing the strategic viewpoint that the industry needs versatile thinkers.
Provided a clear takeaway for the audience, reinforced the earlier theme of broad talent over narrow skill sets, and concluded the session with a forward‑looking, inclusive message.
Speaker: David Freed (in response to audience question)
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that linked India’s AI and semiconductor ambitions, exposed critical gaps in advanced‑manufacturing talent, and proposed concrete, collaborative solutions such as faculty fellowships and expanded hands‑on training. Each of these comments reframed the conversation—from celebrating policy milestones to confronting systemic workforce challenges—and prompted participants to align on actionable strategies for ISM 2.0. Collectively, they shifted the tone from descriptive to prescriptive, ensuring the panel moved toward concrete policy and industry initiatives rather than remaining at a high‑level overview.

Follow-up Questions
How many people work in LAM?
Understanding the size of LAM’s workforce helps gauge industry capacity and partnership potential.
Speaker: Ashwini Vaishnaw
How many people have come from the universities, and how was their experience using the semi‑verse platform?
Assessing university participation and tool usability informs the effectiveness of academic‑industry engagement.
Speaker: Ashwini Vaishnaw
How do you see the future shaping up in 2026, and what does IISc need from the government under ISM 2.0?
Strategic outlook and policy support are crucial for aligning academic capabilities with national semiconductor goals.
Speaker: Paul Triolo (to Professor Saurabh Chandorkar)
What are the gaps you see in the talent pipeline, and what areas should be expanded in collaboration between government and academia?
Identifying specific skill shortages and collaboration opportunities guides targeted interventions to close the million‑person gap.
Speaker: Paul Triolo (to David Freed)
What is IISc looking for from the government and industry, and what challenges or improvements are needed?
Clarifying IISc’s role and its requirements helps streamline coordination among stakeholders and address systemic obstacles.
Speaker: Paul Triolo (to Professor Saurabh Chandorkar)
How can we create a skilling program for wafer development in solar technology, and is there any domestic program for solar wafer manufacturing and marketing in India?
Developing indigenous solar wafer capabilities reduces import dependence and expands the renewable‑energy supply chain.
Speaker: Harish Kumar
What is the first step a young person should take to enter the semiconductor market?
Providing clear entry pathways will help channel India’s large youth population into needed semiconductor roles.
Speaker: Audience participant
Could you highlight the optimization policy linking AI and semiconductors?
Understanding policy on AI‑driven optimization is essential for aligning research, industry practices, and regulatory frameworks.
Speaker: Audience participant (English Literature background)
How can hands‑on training FABs be scaled across India to meet the talent demand?
Expanding practical fab training facilities is critical to convert theoretical knowledge into industry‑ready skills.
Speaker: Professor Saurabh Chandorkar
Can faculty fellowships be established within semiconductor companies to bring industry‑relevant expertise to academia?
Embedding faculty in industry for 6‑9 months would accelerate knowledge transfer and improve curriculum relevance.
Speaker: David Freed
How can more PhD projects be aligned with industry needs to improve employability and innovation?
Aligning doctoral research with real‑world semiconductor challenges ensures a pipeline of highly skilled graduates.
Speaker: Professor Saurabh Chandorkar
What data is needed to accurately quantify the talent gap across the semiconductor ecosystem?
Robust data on skill shortages enables precise planning of training programs and policy measures.
Speaker: David Freed
What optimization techniques are most effective for small‑data R&D versus big‑data manufacturing environments?
Tailoring AI/ML optimization methods to the data context can enhance efficiency and accelerate development cycles.
Speaker: David Freed

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Aligning AI Governance Across the Tech Stack ITI C-Suite Panel


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by stressing that AI must be governed through coordinated global policies because the technology “doesn’t want to stop at borders” and governments need to align their approaches to avoid fragmentation [12-15][17-20]. Jay Chaudhry warned that a patchwork of national rules would create “a lot of issues” for multinational firms and that excessive compliance “kills innovations,” while Aparna Bawa emphasized that cross-border data flows are essential for services like Zoom and that over-restrictive rules would “impede their own citizens’ progress” [22-28][46-55].


Amazon’s David Zapolsky illustrated how its global business model, spanning cloud, entertainment, and satellite services, relies on “free flow of goods, free flow of information, open skies,” and argued that premature regulation adds cost and uncertainty, urging a focus on common principles such as “high-risk uses that affect life, health, or civil rights” [58-66][67-68]. DeepL’s Jarek Kutylowski added that a transparent, roughly uniform framework would enable the company to scale AI services worldwide while respecting differing privacy norms, noting that “a common layer… would be incredibly valuable” for both firms and users [75-82][140-151].


On trust and safety, the panelists agreed that security must accompany rapid AI adoption: Chaudhry described AI as “dangerous because this technology can be abused” and called for security across all five layers of sovereignty [87-95], while Bawa described a partnership model where enterprises provide built-in controls (e.g., guardrails, privacy settings) and users are educated to avoid risky behaviors such as exposing personal data to AI models [102-108][119-128].


Finally, the discussion converged on the need for an international consensus and standards: participants expressed hope that governments will focus on emerging threats like AI-enabled ransomware rather than over-prescribing, and that a shared standard such as ISO 42001 could give “a common set of principles” while still allowing national sovereignty [340-348][381-390]. They concluded that fostering inclusive, cross-border AI collaboration, exemplified by Zoom’s outreach to low-bandwidth villages and DeepL’s mission to unite languages, will be the key measure of success a year from now [364-371][395-398].


Keypoints


Major discussion points


Global alignment of AI governance is essential to avoid fragmentation and sustain innovation.


The moderator frames the need for “global innovation and interoperability” and “reducing fragmentation” ([1-2][12-13][18-20]). Panelists echo this: Jay warns that “over-alignment doesn’t help” and that excessive rules “kill innovations” ([22-28]). Aparna stresses that cross-border data flows are the lifeblood of services like Zoom and that “when governments start putting more and more restrictions… it impedes their own citizens’ progress” ([39-51][55-56]). David calls for “common principles” and a risk-based approach rather than a “unified field theory of AI regulation” ([58-68]). Jarek adds that a “common layer… with a right balance of protecting sovereignty” would be “incredibly valuable” ([75-82]).


Finding the right balance between regulation (risk management) and innovation.


Jason notes that “acting too much… can stifle innovation” ([30-34]), while Jay stresses that “compliance… kills innovation” and that “over-regulation… makes the stuff we put in place old” ([27-28][180-202]). David points out that premature regulation creates “costs, uncertainty and inhibits innovation” ([64-68][281-286]). Aparna describes a “sliding scale” with “customer choice” that varies by user type, emphasizing that “every risk-based decision is you are a user” ([229-250][260-269]). Jarek highlights that different use-cases (e.g., translation vs. patent translation) demand different risk assessments ([283-286][324-327]).


Security and trust are non-negotiable foundations for AI deployment.


The moderator asks about the “trust and security conversation” ([84-86]). Jay explains that AI can be “abused” through data poisoning, rogue agents, and nation-state attacks, and calls for a “layer of security across all five layers” ([87-96][97-103][110-112]). David describes how Amazon builds security into its cloud services (guardrails, data-ownership guarantees, and built-in disclosures) to give enterprises control ([138-160][162]). Jarek stresses that “creating a layer of trust into the outcomes of the AI” is essential for both translation and agentic AI ([167-176][179-182]).


Upstream vs. downstream responsibilities: how platform decisions affect end-users.


Jason asks how Amazon’s “upstream governance decisions” shape downstream customer behavior ([131-136]). David outlines Amazon Bedrock’s model: providing a secure, choice-rich environment where “the data they use… stays their data” and enterprises can set guardrails ([138-160]). Aparna explains Zoom’s dual responsibility: providing enterprise-grade controls while educating individual users (e.g., “don’t put your address into ChatGPT”) ([102-130][152-158]). Jay adds that zero-trust architectures must evolve to protect “AI agents… the weakest link tomorrow” ([180-212]).


Future aspirations: inclusive AI, international standards, and concrete progress in the next year.


The panel is asked what they hope to see a year from now ([330-339]). Jay focuses on preventing AI-enabled ransomware and nation-state threats ([340-360]). Aparna envisions broader “upskilling… so a farmer in a Karnataka village can adopt AI” ([364-374]). David calls for a converging “international standard… like ISO 42001” to give confidence to regulators and industry ([384-392]). Jarek looks forward to “global collaboration… no matter which language they speak” ([395-399]).


Overall purpose / goal of the discussion


The session was convened to explore how governments and industry can collaborate to create coherent, risk-balanced AI governance frameworks that protect citizens, preserve security, and avoid regulatory fragmentation while still enabling global innovation, interoperability, and inclusive access to AI technologies.


Overall tone and its evolution


– The conversation begins with a formal, forward-looking tone, emphasizing the strategic importance of alignment and partnership.


– As panelists share perspectives, the tone becomes balanced and pragmatic, acknowledging both the dangers of over-regulation and the necessity of security safeguards.


– When discussing concrete product decisions (Zoom, Amazon Bedrock), the tone shifts to operational and user-centric, highlighting real-world trade-offs and the need for flexibility.


– The final segment adopts an optimistic, aspirational tone, focusing on inclusive AI, international standards, and a hopeful vision for the next year. Throughout, the dialogue remains collaborative and constructive, with occasional light-hearted remarks (e.g., the Zoom “pretend to be a cat” reference) that soften the technical depth.


Speakers


David Zapolsky


Area of expertise: AI governance, legal and regulatory affairs, global commerce


Role / Title: Chief Global Affairs and Legal Officer at Amazon [S1][S3]


Jarek Kutylowski


Area of expertise: Multilingual AI, language translation, AI governance


Role / Title: CEO of DeepL [S5][S4]


Aparna Bawa


Area of expertise: Cloud communications, AI-enabled collaboration tools, product governance


Role / Title: Chief Operating Officer (COO) of Zoom [S8]


Jason Oxman


Area of expertise: Technology policy, AI industry advocacy, standards development


Role / Title: President & CEO, Information Technology Industry Council (ITI) [S9][S10]


Jay Chaudhry


Area of expertise: Cybersecurity, zero-trust architecture, AI risk management


Role / Title: CEO, Chairman and Founder of Zscaler [S11]


Additional speakers:


None (all participants in the transcript are accounted for in the list above).


Full session report: comprehensive analysis and detailed insights

Jason Oxman opened the session by framing the core dilemma: managing AI risk must not choke global innovation or interoperability, and governments need to coordinate because “technology … doesn’t want to stop at borders” and “wants to cross borders and unite people around the world” [1-5]. He introduced the four panelists, Jay Chaudhry (Zscaler), Aparna Bawa (Zoom), David Zapolsky (Amazon), and Jarek Kutylowski (DeepL), and asked each to comment on the worldwide AI-governance conversation [11-14].


Alignment of AI governance across borders


Jay Chaudhry warned that a multinational firm operating in dozens of jurisdictions would face “a lot of issues” under a patchwork of national rules and that “some alignment is good, but over-alignment doesn’t help” [24-28]. He added that “when we start doing too much governance … we start killing innovations” [24-28] and described India’s “five layers” model of AI sovereignty, insisting that security must overlay all five [90-95]. He also explained Zscaler’s firewall-less, zero-trust architecture and noted a regulator’s three-month education effort to understand this model [210-225].


Aparna Bawa reflected on the COVID-19 “haves and have-nots,” emphasizing that Zoom’s service depends on “cross-border data flow” and that “we would not exist if we didn’t have cross-border data flows and free unencumbered data flow” [47-56]. She argued that sovereign privacy rules create a trade-off with economic progress and called for a “basic level framework” of common norms that respects national sovereignty [55-57]. She also described Zoom’s pandemic shift to a consumer-facing product, the resulting security trade-offs (waiting rooms, passcodes), and the company’s policy of not using customer content to train AI models [102-130][124-130].


David Zapolsky highlighted Amazon’s reliance on “free flow of goods, free flow of information, open skies,” noting that any governmental barrier creates “friction” and “potential problems” [58-68]. He warned that premature regulation adds “costs, uncertainty and you inhibit innovation” because the industry “still doesn’t really know how it’s going to be used” [58-68]. He cited the “Hiroshima agreements” as an emerging consensus [320-325] and pointed to regulatory uncertainty in both Colorado and the EU, which has already delayed product launches [292-303][300-304]. Internally, Amazon follows a “launch everywhere at once” mantra, tempered by the reality that uncertain jurisdictions can force postponements [260-270][280-285].


Jarek Kutylowski argued that any successful technology must be “inherently global” and that a “transparent, roughly uniform framework” balancing sovereignty and privacy would be “incredibly valuable” [75-83][85-90]. He stressed that regulation should be driven by use case rather than geography, contrasting low-stakes email translation with high-stakes R&D or patent translation, and noted that Europe’s earlier regulatory environment gave DeepL an edge while the company is already handling Colorado-style rules [167-182][185-190][195-200][205-210].


Security and trust


Jay Chaudhry warned that AI can be weaponized – “AI-enabled ransomware, AI-generated phishing, nation-state exploitation” – and called for security across all five layers of AI sovereignty [90-105][340-360]. David Zapolsky described Amazon Bedrock’s built-in “guardrails,” data-ownership guarantees (“the data they use … stays their data”), and disclosures that let enterprises control model outputs, positioning the cloud as the safest upstream environment [138-160][162-165].


Enterprise-user partnership and product controls


Aparna Bawa explained Zoom’s pandemic-driven expansion to consumers, the balance between rapid innovation and governance, and the use of “toggles” that let large enterprises enable or disable features (waiting rooms, meeting-ID visibility) while providing a simpler, safer default for individual users [229-250]. She also highlighted user education, noting she tells her children not to “put your address into ChatGPT” because prompts become training data [124-130].


David Zapolsky emphasized Amazon’s upstream design that gives downstream customers choice among 100+ models, built-in security, and the ability to set their own guardrails, all while pursuing the “launch everywhere” principle [138-154][260-270]. He acknowledged that regulatory uncertainty can force delays, as seen in Colorado and the EU [292-303][300-304].


Agentic AI and use-case-driven regulation


Jarek Kutylowski contrasted low-risk translation with high-risk applications such as “R&D documentation for new drugs” or autonomous agents, arguing that trust must be reinforced through transparent governance and that privacy remains a “table-stakes” requirement [167-182][185-190]. He reiterated that regulation should reflect the specific use case, not merely the location, and cited DeepL’s experience with early European rules and Colorado-style compliance [195-200][205-210].


Risk-based regulatory approach


Jay Chaudhry warned that “compliance doesn’t mean security” and recounted a regulator’s misunderstanding of Zscaler’s firewall-less model, urging flexible, evolving policy rather than static mandates [210-225][230-240]. David Zapolsky advised regulators to “work backwards from the harms we can see today,” targeting “high-risk uses that affect life, health, or civil rights” and avoiding a “unified field theory of AI regulation” [67-68][281-286]. Jarek Kutylowski added that risk varies by application and that companies must help customers navigate differing regimes [195-200][205-210].


One-year-ahead aspirations


Jay Chaudhry hopes governments will focus on emerging AI-enabled threats-ransomware, phishing, rogue agents-so they do not overreact and stifle innovation [340-360].


Aparna Bawa wishes for concrete progress on up-skilling and inclusive AI access, citing a farmer in a low-bandwidth village in Karnataka who could benefit from AI tools [364-374].


David Zapolsky calls for an international standard (e.g., ISO 42001) that provides a “common set of principles and a common set of technical standards” while respecting sovereign perspectives [384-392].


Jarek Kutylowski envisions a world where AI-driven multilingual collaboration bridges continents, enabled by a global governance framework [395-398].


Jason Oxman closed the session, noting the evolution from “AI Action” to “AI Impact,” thanking the panelists, and looking forward to the next summit.


Session transcript: complete transcript of the session
Jason Oxman

The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and interoperability. So for today’s discussion, we’re very fortunate to have leaders from across the AI stack, if you will, who are here with us to discuss how governments can work in partnership with industry, if you will, to align responsibilities, to reduce fragmentation, and to build trust in AI systems that are built for scale. We are very pleased to have with us some luminaries from across the tech ecosystem. Jay Chaudhry is the CEO of Zscaler. Aparna will be joining us in just a moment. David Zapolsky. I almost missed that. David Zapolsky, who made it, is the Chief Global Affairs and Legal Officer at Amazon.

And Dr. Jarek Kutylowski. How did I do there? Thank you. He is the CEO of DeepL. So to set up the conversation, I wanted to ask each of our panelists to help us think through the AI governance conversation that’s taking place globally. So as we’ve seen here at the AI Impact Summit, there are efforts among global governments to align their approach, even though they may take different directions. Hi, Aparna. And as Aparna is now joining us, I will introduce Aparna Bawa, who is the Chief Operating Officer of Zoom, which is not only a technology company, it is also a verb. And so thank you, Aparna, for being here with us today. So as we were getting ready to talk about AI governance conversations, it is absolutely the case that there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders.

It wants to cross borders and unite people around the world. So I wanted to ask each of our esteemed panelists, and, Jay, I’ll start with you, for perhaps your philosophical perspective on how AI alignment can take place across governments. Why is it that that alignment matters? And perhaps even share your perspective on what happens if that AI alignment breaks down and governments are going off in different directions and taking different approaches. Where do you see the biggest challenges around this idea of alignment of AI governance around the world? Jay, thank you.

Jay Chaudhry

Thank you. So we are a highly connected world. Imagine any large corporation that’s doing business in 50 countries. If each country has its own governance rules and all, but you’re using AI, and you’re using some systems locally, some systems globally, it’ll create a lot of issues. Some level of alignment is good, but over-alignment doesn’t help either. In fact, I have similar thoughts on governance too. Some level of governance is needed. When we start doing too much governance, too much compliance, we start killing innovations. So that’s personally my view.

Jason Oxman

No, it’s an important viewpoint, because there is this idea that governments need to act. They need to protect citizens. They need to ensure security. But acting too much, perhaps in advance, can stifle innovation. So, Aparna, I want to go to you with the same question. As we’re having this global AI governance conversation here at the AI Impact Summit, governments are going in different directions in many cases. This is the first time the conversation has taken place in the Global South, so I think that’s a good thing for aligning governance approaches. So from where you sit, why is alignment across the AI governance ecosystem internationally so important, and what can happen when it goes wrong?

Aparna Bawa

I will say, just to start, as an Indian American and someone who has lived in India, and we talked about this this morning at a breakfast we were at, it is quite striking to me, some of the haves and have-nots. Like even we were talking about this morning, for example, during COVID, how some countries were fighting for PPE and fighting for oxygen tanks. And, you know, we in California were stockpiling toilet paper. I mean, the contrast is so stark. And I remember during COVID thinking to myself, that doesn’t seem right. And so I do feel like countries should protect the rights of their citizens and should want to advance their economies. But it is a tradeoff.

And I think it’s very well put to say it’s a tradeoff. So, for example, Zoom: imagine you would not be able to connect with people globally if we did not have cross-border data flow. So when we’re talking about AI, you can talk about AI, but it’s no different at the data layer. We would not exist if we didn’t have cross-border data flows and free, unencumbered data flow. And when governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress. And so at some point, it becomes a tradeoff. Now, obviously, the requirements around privacy and security are table stakes. If you get on a Zoom meeting with someone, you want to know that the person on the other side is that person.

That is sort of table stakes. But I’m with Jay on this one. I think there’s a basic level framework that is necessary. to be honest we live today with multiple in the United States we live with multiple states privacy frameworks and is it great no is it inefficient yes there’s something in between where you have a framework that is commonly understood with common set of norms and values I also respect a right of sovereignty for a nation so something there has to be a balance that

Jason Oxman

makes sense David Amazon operates pretty much in every country on the planet although I’m sure you can name a few that you’re not in yet there’s a few yeah there’s a few small number can you share your view on how this AI governance conversation needs to have some perhaps some unity to it

David Zapolsky

Sure. And first of all, I’m going to try not to repeat Aparna’s view, because I basically agree with everything you just said. If you think about every one of Amazon’s business models: our stores, the way we’re able to export 20 billion of Indian small to medium-sized businesses to overseas markets. We’re looking to take that to 80. If you look at the cloud, if you look at our entertainment business, if you look at the satellites that we’re launching to launch a global Internet service, every one of them depends on free flow of goods, free flow of information, open skies.

That’s just kind of the way we’ve designed the company, to be global and to have interoperable services. And so every time a government erects barriers to that, it creates friction. It creates potential problems. And I think the global trend towards more of that is concerning. With AI particularly, I think the danger of some of the regulation that we’ve seen around the world is that we all still don’t really know how it’s going to be used, where it’s going to be most effective, where it’s going to be dangerous. There’s a lot of theories about it. There’s a lot of fear, uncertainty, and doubt about that, a lot of science fiction. And I think the danger in regulation, before you really understand the technology or how it’s going to play out, is that you create costs.

You create uncertainty, and you inhibit innovation, you inhibit adoption. And that’s kind of what we’re seeing a couple years into this large language model journey. There are parts of the world that were quick to regulate, and civil society was all over that: we’re going to regulate all these things, we’re going to come up with these theoretical constructs of high risk, low risk. And we don’t really know what that means in practice yet. And so what’s happening? Well, look at Colorado. Colorado was one of the first states out of the box with comprehensive AI regulation, which, by the way, isn’t bad in principle. But they don’t know how to apply it. No one really knows how to apply it. And I think you’re seeing some buyer’s regret. They put the implementation on hold. They want to figure out standards. I won’t even talk about the EU, but they’re pretty much in the same boat. They’re all looking for ways to not have to put the thing into practice, because they don’t really know how it’s going to play out. So I think what we need to do is step back and look for some common principles. What is a high-risk use?

What can we all agree are high risk? Well, if you’re using a technology to make decisions that are going to affect the life, health, or civil rights of an individual, let’s talk about that. Are there laws that protect that already? Do we need to supplement them? Let’s work backwards from the harms we can see today and regulate there, versus trying to come up with the unified field theory of AI regulation, because that’s only going to slow us down.

Jason Oxman

Great. Jarek, we’ve been talking about unifying global governance approaches, making sure, one might say, that they all speak a common language. That’s what DeepL does. See what I did there? Your language AI platform is all about making sure everyone can communicate with each other regardless of the language they speak. From your perspective, you’re our European-headquartered representative here, but you do business around the world. What can you share with us about how AI governance conversations being unified across governments is important to DeepL?

Jarek Kutylowski

I truly believe that any successful technology needs to be inherently global. That holds both for the commercial models of the companies that we’re representing, but it also holds for the AI, just the access and the ability to reach the whole globe with what we are building. I think this creates the economies of scale on everything that we’re building. And when you are in AI, obviously you’re running very, very high R&D costs, and you have to be able to offset that with a huge customer base. So having a global market and being able to deploy to the whole world, and therefore also to fulfill the mission of our companies, whether it’s just enabling communication, maybe in the case of Zoom, or making sure that this communication can happen multilingually, as in the case of DeepL, that really depends on a framework that is transparent, and on a framework that is maybe not too different in all of the parts of this world.

And therefore, having some common layer, having this right balance of protecting the sovereignty, and protecting maybe a slightly different approach and a slightly different mindset to certain topics like privacy, where we do have differences across the world. But doing that in a way that has a common understanding, that would be incredibly valuable, I think, not only for the companies that we represent, but also really for our users and for our customers, who depend on the best possible solutions.

Jason Oxman

Jay, I want to come back to you because you are our resident security expert and sometimes doomsayer about what happens if we don’t include trust and security as part of the conversation. I’ve heard you remind members of the government of India, indeed, that although the five pillars are enormously valuable, if you don’t have security overlaying them, we’re all in trouble. Talk to us about how the trust and security conversation is still a vital component around all the excitement.

Jay Chaudhry

Yeah, I have said that AI is powerful, but AI is dangerous, because this technology can be abused. In India, there’s a great focus on five layers, and the focus is about being sovereign, having everything that you can control. It starts with the application, then models underneath, and so on and so forth. While it’s good to have that sovereign stuff, imagine a bad guy can control all of that sovereign stuff, sitting somewhere out there. Data poisoning can be done. All kinds of stuff can be done. So having a layer of security across all five layers becomes very important. So we should think about sovereignty not just in terms of this thing is sitting in my country, but also in terms of who can access it, who can do some of these things with it, which is often overlooked.

And also, the adoption of AI is happening very fast. And it’s wonderful. And I’m not saying we should slow it down. I think we should embrace fast, but we should also start thinking about embracing cyber, to make sure things are used securely at the same pace.

Jason Oxman

And in order to make sure that security is part of the AI ecosystem, Aparna, I want to ask you about what we all have responsibility to be thinking about as users, what enterprises have a responsibility to be thinking about. You know, we’ve talked about governance from the policy perspective, but, of course, users and enterprises also have a responsibility around AI. And as the COO of Zoom, you look over both the public policy and business aspects of what you’re deploying. How does the conversation about what we all should be thinking about factor into product development and deployment conversations?

Aparna Bawa

It is a true partnership. And you know what? When Jay was talking, it resonated with me. When you work for a technology company, you’re not just working for a company: you want to develop technology, and you want people to adopt it as fast as possible. You want them to be early adopters. It’s so exciting. In fact, in our company (companies have lots of different functions, obviously), our engineers, our developers, our product people are super early adopters. They’re first to take any sort of app that’s come out, whether it’s Cursor, etc., and use it in their day-to-day. And then there’s other people who have other day jobs. I mean, there’s finance people and the people people, the HR people. They have day jobs, and they’re learning AI at night, because they’re realizing that, if I’m not on the AI bandwagon, I’m going to get left behind. And by the way, if you’re looking to develop apps, yes, you can focus on the sort of tech applications, but the real secret that’s not getting a ton of attention, maybe a little bit of attention, is these non-technical roles that could be augmented with AI. So in that frame of mind, I think it’s a really important thing to do, and I think it’s a really important framework. When you work for that kind of technology company, it can be difficult to then start saying, but wait a minute, you need to slow down, because you need to make sure that your CI/CD work is still going, and it’s amplified because of the risks of AI: your security certifications, your red teaming, your privacy standards, all of that stuff is maintained.

I will tell you, for the user plus the enterprise that is pushing out this technology, it’s a partnership. It is so important. The one thing that we learned during the pandemic: if you think about Zoom before the pandemic, it was an enterprise-focused company, a work-focused company. And basically, when the pandemic hit, we said, okay, all you consumers, we will just hand you a platform that we usually give to IT administrators. And what do IT administrators at our customers do? They decide whether to turn up the security and privacy controls, turn down usability, because it’s a tradeoff. It’s a definite tradeoff. They decide. We, in turn, just handed it to consumers, and you can’t do that.

Who decides? And we realized, okay, public schools, they don’t have IT administrators. They don’t know how to turn on waiting rooms. They don’t know how to, you know, hide the meeting invite. They don’t know how to do these kinds of things. You have an obligation as an enterprise to make sure that there are sufficient controls for the individual user, and it scales all the way up to the enterprise, and to maintain that level of flexibility. You have that obligation. But on the same side, I would say the user, to be smart, has to understand some basic levels. I’ll tell you an example. My kids use all the AI engines, ChatGPT, Claude. They use them all. And it is a conversation we have, to say, you don’t put all your information into your prompt, because if you put all your information in your prompt, it is going into that engine, and it will train that engine.

On the flip side, we as an enterprise provider have made the statement, and we have made the policy decision, that we will not use our customer content to train AI models. When I’m training my kids, I have to tell them, you can’t put your address into ChatGPT. You have to make sure that you’re safe in some way. So those are the kinds of things that you have to keep in mind. It’s a partnership between the user and the enterprise. And I think the enterprise obligation scales as you get down into the consumer use.

Jason Oxman

And I want to stay on this theme of training the user, if you will, whether they’re your children or a customer, because it is important for the tech industry to be mindful of the downstream. And, David, I want to come to you with this question. Amazon is, in a lot of ways, an upstream operator. You enable business and consumer customers on everything you do, from content to e-commerce to broadband in the future to your cloud customers. So how do you think about the upstream governance decisions that you’re making at Amazon and how they impact the downstream? How do you think about the downstream decisions or ways of operating that your customers are going to have to make as a result of those decisions you make at the Amazon level?

David Zapolsky

Well, we’re fortunate to have the scale to be able to serve enterprises in the cloud at the service layer. And so, even before the current AI craze, we have a couple of decades of experience in thinking through what governance and security look like for our enterprise customers. And as we’ve moved into this, you know, newer age where there’s AI services available, one of the best solutions that we could come up with is creating an environment within the cloud services that so many hundreds of thousands of enterprises already use to give them access to models, not just our own. And we do our own models, and there’s upstream governance on those: you know, testing, making sure we correct for bias, the things that a responsible model builder will do.

But at this enterprise level, the service is called Bedrock. We try to think through what customers are going to need. So we build in security. We build in the type of infrastructure that allows customers to scale up or down. We build in choice. Enterprises can choose from over 100 different models, open source and closed source. Not just ours, but all of the leading models from all around the world. And so we try to create an environment, a platform, where enterprise customers can come to use this new technology. First of all, they get access to it without having to build their own servers and train their own models. And secondly, they can do it in a way where they can rely on the security of the infrastructure.

The other thing that we provide customers is that the data they use to employ those models stays their data. It doesn’t go to the model builders and it doesn’t go to us. So you can build that into the system. And then on top of that, given the way that enterprises are using this technology, we try to build as many tools as possible to put the control of how this technology is deployed into the hands of enterprises and users. And so, for instance, on the Bedrock platform, we provide guardrails that allow you, as an enterprise, to basically control what types of outputs the models are going to give you. Are they too toxic?

Are they biased? Can you filter for certain types of content? We build those controls right into the interface so enterprises can have that control. We build disclosures into the types of services that we offer, so that we provide some visibility and transparency: here’s how this thing is built, here’s what you should use it for, here’s what you probably shouldn’t use it for. And we provide those kinds of choices to consumers. So you have to think through the overall security in the system and the environment, and the accessibility of this technology. And in our approach, the cloud is probably the best place to do that. It’s certainly the easiest way to access the technology, and likely the safest.
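The guardrail controls David describes can be imagined as a declarative configuration an enterprise assembles for its models. The sketch below is a rough Python illustration; the field names loosely echo the Bedrock Guardrails API, but they are assumptions for illustration here, not a verified schema.

```python
# Hypothetical sketch of an enterprise guardrail configuration: deny-listed
# topics plus category filters applied to prompts and completions.

def build_guardrail_config(name, blocked_topics, filter_strength="HIGH"):
    """Assemble a guardrail config: topic denials plus content-category filters."""
    return {
        "name": name,
        # Topics the model should refuse to discuss at all.
        "topicPolicyConfig": {
            "topicsConfig": [
                {"name": t, "definition": f"Content about {t}", "type": "DENY"}
                for t in blocked_topics
            ]
        },
        # Category filters applied to both inputs (prompts) and outputs.
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": cat,
                 "inputStrength": filter_strength,
                 "outputStrength": filter_strength}
                for cat in ("HATE", "INSULTS", "SEXUAL", "VIOLENCE")
            ]
        },
        # Messages shown to the user when a filter triggers.
        "blockedInputMessaging": "This request violates company policy.",
        "blockedOutputsMessaging": "The response was blocked by policy.",
    }

config = build_guardrail_config("finance-assistant", ["investment advice"])
print(len(config["contentPolicyConfig"]["filtersConfig"]))  # 4 filter categories
```

The point of such a design is that the enterprise, not the model builder, decides what output filtering applies, which matches the "toggles in the hands of the customer" theme of the panel.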

Jason Oxman

Jarek, you’ve moved DeepL’s business model from where it started, as translation. Now it’s getting into agentic AI, and you have agents on your platform that can execute tasks on behalf of your customers. I can imagine that raises very different governance policy decisions that you have to make on behalf of your customers when agents can act autonomously, versus when you’re just translating, particularly because you’re a global business and they can act autonomously across borders. How are you thinking about the policies and procedures for governance that you have to put in place in an agentic AI world that are different than perhaps they were in a language translation world?

Jarek Kutylowski

I think generally, but also in the language space, the stakes are becoming higher and higher. AI is becoming more and more powerful. Even if you look at translation: a couple of years ago, DeepL would be translating your typical email to your customer. And that is important, of course. You want to look great in front of the customer. You want to be eloquent. You want to be able to connect with them, maybe really on a human level, in the language that this customer is speaking. And you’re enabling your business to basically become global very, very easily. But now what DeepL is translating is plane maintenance records. It’s R&D documentation for new drugs that actually influences how those drugs are developed and whether they’re approved by the FDA or not.

So these are highly critical use cases. And I think it has been mentioned that privacy is just the table stakes; it’s just the beginning. Creating a layer of trust in the outcomes of the AI, whether that’s translation or agentic AI, so that those decisions really follow what the enterprise expects of the AI: that is really where the battle is right now. That is where the governance aspect coming from the political and governmental side obviously needs to be included. But there’s also the aspect of how our enterprise customers want to regulate the AI that is being deployed, and how flexible the products that we all provide can be toward those very different approaches that we’re seeing across the world, and with different types of enterprises.

Jason Oxman

Each of you mentioned the concept of risk management in your comments, and I want to come back to the balance that Jay alluded to earlier between promoting innovation and managing risk. Obviously there is a trade-off; it’s a sliding scale: the more you regulate risk, the less room there is for innovation. I want to ask each of our panelists, Jay, I’ll start with you, how you’ve seen a flexible, risk-based approach from government be the most effective, where that flexible approach still leaves room for innovation. Or, on the flip side, if you want to give any examples, where you’ve seen it go wrong, where a more prescriptive approach to regulation has denied you the opportunity to bring products or services to market, or has generally been more of a challenge for industry because a government didn’t get the balance right between managing risk and promoting innovation.

Jay Chaudhry

There are many facets of governance and risk. Take, for example, data privacy. Obviously, that’s one kind of factor. But hacker attacks, from a cyber point of view, are a different kind of factor. We look at it more in terms of two things. One, making sure your data is not lost. So the data becomes very important. There’s a consumer end of data, but the bigger issue on the data side is enterprises. And in the practical business world, you can’t treat all data the same way. I’ll give you an example. When I worked with General Electric, the CISO, a very smart guy, Larry Virginia, would say: when I tried to secure everything, I secured nothing.

Then he would give an example. He says, as a CISO, I need to protect the IP, or intellectual property, of my products. But my washers and dryers are out there. I don’t spend time trying to protect their IP at all. You can buy them in a store and figure them out. But I’m dead serious about protecting the IP on my jet engine. That’s very important. Trying to just say all consumer data, all this data, must be treated the same just starts creating issues. That’s why I also like to say compliance doesn’t mean security. In fact, when you work on compliance, all of this works through the government entities, pros and cons, and it takes a lot longer. And by the time it’s out there, the cyber and compliance needs have moved on.

So the stuff you put in place is, many, many times, already old. In fact, when Zscaler came out with our zero trust, cloud-based architecture, a lot of these regulators came in: wait a second, where is your firewall? What do you mean, firewall? We don’t use firewalls. We are anti-firewall. And they said, no, no, no, wait a second, how can the banks use it if it’s not a firewall? When we went through certification for the federal government in the U.S., the certifying body’s first question was about firewalls. It took us three months to educate them. That’s why I really don’t like over-regulation. There needs to be a way of asking: what’s the impact of this thing, on what kind of data? That’s the right approach. All data is not created equal. Trying to put the onus of securing all data on everyone gets hard; then classifying data gets hard. These are not simple issues, and AI makes it very hard. We don’t even fully understand how AI does what it does. So I think a flexible policy that evolves is a better thing, while keeping track of the most important data. And then, beyond data, hackers are a big problem too. We talked about agents today. A user is the weakest link; tomorrow, AI agents will be your weakest link, and they’ll be all over. They are maturing; they’ll come. Imagine an agent getting hacked or hijacked in your company, with access to all kinds of stuff.

So that’s where companies like Zscaler, we are focused on making sure our Zero Trust Exchange can be extended to deal with agents, starting with understanding their identity, authorization, all those things. Those things are very important the way we look at it. Otherwise, business will shut down.
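The zero-trust posture Jay describes for agents, identity first, then explicit authorization, can be sketched in a few lines. This is a minimal, hypothetical illustration (all names invented); a real deployment would use signed, short-lived credentials rather than an in-memory table, but the default-deny logic is the core idea.

```python
# Default-deny (zero trust) authorization for AI agents: every agent carries
# an identity, and every resource access is checked against an explicit
# allow-list. Unknown agents and unlisted resources are refused.

from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    agent_id: str
    scopes: set = field(default_factory=set)  # resources this agent may touch


class ZeroTrustGateway:
    def __init__(self):
        self._registry = {}  # agent_id -> AgentIdentity

    def register(self, identity: AgentIdentity):
        self._registry[identity.agent_id] = identity

    def authorize(self, agent_id: str, resource: str) -> bool:
        # Default deny: no registered identity, or no matching scope -> refuse.
        identity = self._registry.get(agent_id)
        return identity is not None and resource in identity.scopes


gateway = ZeroTrustGateway()
gateway.register(AgentIdentity("invoice-bot", {"billing-db:read"}))

print(gateway.authorize("invoice-bot", "billing-db:read"))   # True: in scope
print(gateway.authorize("invoice-bot", "hr-db:read"))        # False: not in scope
print(gateway.authorize("rogue-agent", "billing-db:read"))   # False: unknown identity
```

The key design choice is that nothing is reachable by default: a hijacked or rogue agent gets exactly the scopes it was granted and no more.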

Jason Oxman

So, Aparna, Zoom brings some amazing innovations using AI to the platform that we’re all familiar with. It makes it a lot easier for us to do everything from transcribing meetings to pretending to be a cat when you’re in court. No, that’s not a – that’s a –

Aparna Bawa

I was going to say it can summarize your meeting. It can take notes for you. It can send action items to your teams. It can calendar those action-item follow-ups. It can give them deadlines. All done.

Jason Oxman

There it is. But I can imagine you’ve had some challenges around the world in that balance between innovation and risk management from governments. Can you either share a positive example of where that’s gone well in your mind, or, if you want to, an example of where it hasn’t gone well, where consumers and businesses have been denied Zoom innovation because that balance isn’t struck? Or perhaps you can keep it at a higher level if you prefer.

Aparna Bawa

There is always that tension between innovation and governance: our product team wants to innovate, innovate, innovate, and our governance team, on security, privacy, et cetera, is always thinking about that as well. So how do you strike that balance? I’ll start at the top level. It’s a sliding scale on many different fronts. But if you look at it like a layer cake, or even a stack, at the top level it’s customer choice. David was right when he said customer choice, but customer choice is different by the category of customer. If you are an enterprise and you have 200 people on an IT admin team under the CIO, and you are buying Zoom and you have a giant security team and a giant compliance team, you’re going to be making choices for yourself.

I’m not going to tell HSBC what they’re going to do. They’re going to decide what they’re going to do. And we deliver the platform, and we have toggles for them to decide what they want to deploy, what they don’t want to deploy, who they want to deploy it to. We make it very easy. So we provide a lot of choice. The same platform serves the Fortune One. The same platform also serves my mother-in-law, who is on the free account and who is chatting with her friends and won’t upgrade. I tell her, please upgrade. She gets off, waits five minutes, gets back on, and that’s how they do it. So for her, it’s very different.

So for her, you have to mandate a few things. You can’t give your meeting ID to everybody. It cannot be at the top of the UI. Those are some basic things. You have to have waiting rooms. If you’re in a school environment, you have to have mandatory passcodes. These are the sorts of things you mandate. So that’s a sliding scale. Now take it one level deeper. The biggest thing I have learned from working at Zoom, and in all honesty, I credit our founder for this, is that everything goes back to the user experience. And our customers are not monoliths.

They don’t just want to take on all the technology; they want to do it in a safe and secure way. They don’t want to be surprised. So you have to think: I am an end user. It doesn’t matter that I sell to Zscaler, thank you very much. I need to worry about how Jay Chaudhry’s engineer feels when he gets on Zoom. And that’s the user experience I’m going for. So if I’m a finance person on Jay Chaudhry’s team and I say I don’t really want my meeting to be automatically transcribed and then fed into an AI engine, because I’m worried, or if I’m a lawyer worried about attorney-client privilege, well, I need to give them the option to say I opt out of that.

I need to be able to give them choice. And I think that’s how I think about it. Every risk-based decision starts with: you are a user. And you’re not one kind of user; you have multiple types of users. How do you make it easy, at the very lowest common denominator, for them to trust you? That’s really the answer that you work through.
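The layered choice Aparna describes, an account-level toggle gated by an individual user's opt-out, can be sketched as a simple policy check. This is an invented illustration, not Zoom's actual logic; the setting names are hypothetical.

```python
# An AI feature (say, meeting transcription) runs only if the account admin
# has enabled it AND the individual user has not opted out, e.g. a lawyer
# protecting attorney-client privilege.

def transcription_enabled(account_policy: dict, user_prefs: dict) -> bool:
    # The admin toggle gates the feature for the whole account...
    if not account_policy.get("ai_transcription", False):
        return False
    # ...but any individual user can still opt out.
    return not user_prefs.get("opt_out_ai_transcription", False)


account = {"ai_transcription": True}
print(transcription_enabled(account, {}))                                  # True
print(transcription_enabled(account, {"opt_out_ai_transcription": True}))  # False
print(transcription_enabled({}, {}))                                       # False
```

Note the defaults: a missing admin setting is treated as off, so the feature is opt-in at every layer rather than opt-out.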

Jason Oxman

That’s great. David, let’s go from different kinds of users to different kinds of products. You were the first on the panel to use the phrase risk-based approach, and nowhere is that more evident than in Amazon’s wide range of products and services for your customers. I can imagine it’s a very different internal conversation about governance and risk when determining how AI is going to recommend my next series or show on Amazon Prime. Not a lot of risk there. But other Amazon products could have more risk to them. So think about the sliding scale. And you also travel the world, quite literally, as you’re doing now, talking to governments about that balance between innovation and risk management and the risk of getting that balance wrong.

How do you communicate that to governments and also make the internal product decisions that you need to around those issues?

David Zapolsky

Well, you sort of stole one of my talking points when I have some of these conversations, which is that it does matter how this technology is used, and where. It’s a different set of considerations when we think about what kind of protections or risks arise from an AI-assisted shopping assistant versus a tool we might make available to help doctors document how they’re treating patients and make it easier for people to prescribe medications. Those are two very different risk profiles. But if you start with a regulation that doesn’t differentiate between those, you’re going to inhibit innovation. You’re going to prevent adoption of really useful ways that this technology can be used.

You know, that’s the pitch I make when I get to talk to people whose business it is to think about regulation. It is about risk. It’s about how the technology is used. And my point earlier was that we don’t really know yet how the technology is going to be used. When we see it, we can analyze it. And on that point generally, there are cases where technology companies have made a decision to not bring certain types of technology into, say, Europe because of regulatory uncertainty. And typically those get worked through. But I can’t tell you how many conversations I’ve had internally where folks have come up with an idea or a product, and our sort of internal mantra is we want to launch something everywhere all at once.

We want to serve customers. If we have conviction that something is good for customers, why just do it in one place? And sometimes the answer to that is: it’s too costly. It’s going to take more time. We can’t really figure out how this is going to fit within the regulatory scheme in a certain other jurisdiction, because they haven’t thought of it either. And so we’re going to wait. We’re just going to wait on that. We’ll launch it in this place first and we’ll see if it works. And then, if it works, we’ll think about the costs associated with scaling it globally. And so that’s a real-world issue that governments have to understand and deal with when they make decisions about how prescriptive their regulations are going to be, especially in the abstract.

And so those are the sorts of conversations I have. In the AI space, I think you can look at countries like Peru. You can look at countries like Japan that have proceeded cautiously. I think India has the same approach, and I’m very encouraged by the way India is approaching these issues. You can’t rule out regulation completely. And Amazon is an advocate of regulation that mandates that people developing and deploying this technology do it responsibly. But we have to understand what we’re regulating before we can really pull the trigger. I think those types of examples are useful for people to keep in mind when they’re considering how to resolve that balance.
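The risk-based differentiation David argues for, regulating the use case rather than the technology, amounts to a mapping from use case to risk tier to required controls. The tiers and controls below are invented for illustration and do not correspond to any real regulatory scheme.

```python
# Toy model of use-case-based regulation: a shopping recommender and a
# medical documentation tool run the same underlying technology but land
# in different risk tiers with different required controls.

RISK_TIERS = {
    "shopping_recommendations": "minimal",
    "meeting_transcription":    "limited",
    "medical_documentation":    "high",
}

CONTROLS = {
    "minimal": [],
    "limited": ["transparency_notice"],
    "high":    ["transparency_notice", "human_oversight", "audit_logging"],
}


def required_controls(use_case: str) -> list:
    # Unknown use cases default to the strictest tier (precautionary).
    tier = RISK_TIERS.get(use_case, "high")
    return CONTROLS[tier]


print(required_controls("shopping_recommendations"))  # []
print(required_controls("medical_documentation"))
```

A technology-blanket rule would apply the "high" list everywhere; the use-case lookup is what leaves room for low-risk innovation.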

Jason Oxman

And the result of those conversations not going in the right direction, David, is that consumers or businesses might get denied the technology that their neighbors are enjoying. So, Jarek, I wanted to ask you, as the CEO of DeepL, in the process of expanding around the globe, are there examples that you can think of where you’ve had to make a go/no-go decision on entering a particular country or launching a particular product, including your new agentic AI products, because of the regulatory environment or because of the way in which a country looks at AI? Or, on the flip side, if you want to take the positive: are you attracted to a particular market because, as David said, it’s done the right thing, like Peru or Japan, or as even India is endeavoring to do, where they’re more likely to get DeepL’s service because of the decisions they’ve made, the approach they take to these AI governance decisions?

Jarek Kutylowski

Yeah, Jason, let me maybe first start with a principle. I’m a scientist at heart, so I’m really excited about bringing the best possible technology to each and every one of our customers and users. I think they all deserve it. I think they all should be equipped with that. But yes, there are some of those things that we need to take into account. And actually, quite often, those are not really location-based or country-based or regulation-based, but really based on the use cases of those customers. AI can be incredibly powerful, but that power also demonstrates its possibilities in different ways in different applications. And going back to my example from earlier: the translation of an email has just a different criticality grade than the translation of a patent application.

The execution of an agent in a particular environment versus in an enterprise environment has a different grade of complexity. But going back to the regulation aspect of it, I think we’re lucky as a company to have grown in Europe, in an environment which is maybe slightly earlier on regulation than other places in the world. And I think that gives us an edge: to be able to understand how to work with this regulation, how to prepare, and then also to be very, very early in other markets, like you mentioned Colorado earlier, and be able to handle that complexity for our customers, really. Because most often it is our customers who do not understand this space.

We do. And we have to go all of the way to give them the possibility to figure this out for themselves, for their applications, for their use cases, and across a whole range of products. So, in short, I think it can be managed, but it is really part of the excellence of a company to be able to manage it together with the customer.

Jason Oxman

The last question we have time for, I want to address to each of you; it is a forward-looking question. It used to be possible to have conversations about policy outcomes years in advance. I think the best we can hope for is for me to ask this question in advance of Switzerland hosting the next AI Impact Summit, or whatever they choose to call it, next year at this time. So my question to all of you on the panel is this: a year from now, if we were to gather, and something had happened in the AI governance and regulatory space over the course of that year that you’d like to see happen, and you were looking back to India and saying, I’m really glad that one thing happened, or that one thing changed, or this government or this international body did this thing over the course of the last year, to really help unleash the innovation and power of AI in the secure way that we all want to see, what could that one thing be?

And it can be something that you’re focused on in your business over the course of the next year that government can help make a reality. So, Jay, I’ll start with you with this question, and then we’ll go down the panel to bring our time to a close together. What’s the one thing you’re hoping, if we’re talking a year from now, has happened in global AI governance that’s going to make everything that we’re talking about and excited about a huge success?

Jay Chaudhry

The AI train is moving at a pretty fast pace. It will keep on moving. Then you look at the things that could go wrong. That’s where governance comes in. I think there’s too much focus on data and less focus on the bad things that bad guys can do. I think probably the biggest issue will be, hey, today we hear all about these ransom attacks, ransomware. AI can make it so much easier. Bad guys are very motivated to make money. Today, when they attack, they have to find your attack surface. They’re finding those IP addresses that are open to the Internet, those firewalls and VPNs and everything. With AI, you can discover it in 30 seconds. AI can write beautiful emails for phishing,

as if they come from your CFO. Once you get in, AI agents can discover your whole network to figure out what those things are. They can bring those things down. So I think we need to focus more on making sure we can protect against those risks. I talked about AI agents going rogue; those are one kind of risk. And the second kind of risk government needs to worry about is nation-states trying to use AI to really gain advantage, getting these backdoors planted and all that kind of stuff. I hope that, if we’re sitting here next year, we’ve done enough in those areas that we don’t have some of these things blow up.

If they blow up, then government starts tightening things more and more, which doesn’t always help. So proactive work to secure these areas will be very, very important.

Jason Oxman

All right. So protecting against these threats so that government doesn’t overreact and stifle innovation as a result. Aparna, what’s your one thing that you hope for for next year?

Aparna Bawa

You know, it really struck me at this Impact Summit, the focus on inclusivity, on skilling and upskilling people who wouldn’t otherwise have access to technology. And if you think about why we got started: we were founded because we wanted to provide free and open access to collaboration and have people from all walks of life connect. I think our founder had to travel to date his wife, you know, and couldn’t see her more than once every number of weeks. So it’s something powerful. In a year, I would like to actually see that happen. Now, it’s not, I think, completely altruistic. I do firmly believe that even the enterprises who have more of a chance of adopting AI and gaining some of the efficiencies of AI, they need a market.

And the market is you, me, and all of us. We were just talking about Karnataka in another meeting, about a village in a corner of India that has low bandwidth, et cetera. If a farmer there can adopt AI and change their life in successive generations, that is good for business. And so, for me, progress on that. I still think it’s mostly talk right now, but I love the idea. I love seeing a billboard where Prime Minister Modi is talking about inclusivity. That’s wonderful to hear. It’s good for business. Maybe it’s a bit altruistic, but I would think it would be good for Zoom.

Jason Oxman

I love it. AI lifting up more broadly the world. David?

David Zapolsky

I’ll take a much higher-level approach. You know, I think there’s a sort of consensus around AI regulation that’s yearning to get out. It’s sort of gelling a little bit. We saw it in the Hiroshima agreements. We see it talked about in forums like this. There is sort of an emerging consensus about how to approach this technology in a responsible way. And I totally, again, agree violently with Aparna in adding the inclusiveness piece, and commend the Prime Minister and India for making that a big part of the debate. But I would like to see countries around the world start to converge on this basic consensus.

It doesn’t mean that countries can’t have their own perspectives or sovereign outlooks, but there is a movement toward an international standard. And there’s a parallel with technical standards: there’s ISO 42001, which everybody can abide by, giving people a common set of principles and a common set of technical standards to meet, so that we can all be more confident in the way we roll out this technology.

Jason Oxman

I love that. A move toward more global industry consensus -based standards to help govern all that we do, hopefully put government regulators out of business if we can all do it right. Jarek, you get to bring us home with your aspiration for us as we gather together next year in Switzerland.

Jarek Kutylowski

Yeah, I think there’s a place for those government regulators too. I would love, as you just explained, getting them all together and creating a framework. But I think there is a bigger role for AI in this world. There are so many amazing humans across all of the continents of this world, and I would love to see in a year, and once again that goes back a little bit to DeepL’s mission, for them to be able to collaborate as much as they can, no matter where they sit geographically, no matter which language they speak, no matter what they do in their job: just giving the opportunity to each and every one in every place of this world. And there are amazing examples of cooperation between India and other countries, and of strengthening that even more; I think AI gives us even more possibilities to do that in the upcoming year. So maybe in Switzerland we’re going to be able to look back at that and say: hey, in India we set the cornerstone of making this possible and making this world a better place.

Jason Oxman

I bet they will. You know, it was AI Action last year. Now it’s AI Impact. Hopefully it will be AI Collaboration or something of the sort next year. I love that image of everybody, across borders, across geographies, across languages, collaborating together. What a great discussion. I love how we were both philosophical and practical. I really appreciate all of you sharing your deep insight on these important AI governance issues. And I appreciate all of you being here in the audience to hear this discussion. Please join me in recognizing and thanking our terrific panelists. And please enjoy the rest of the summit. Thank you. Now we’ve got to get a picture. Are we going to take a picture?

We have to get a picture, yeah. We’re going to have to hang back behind there.

Related Resources: Knowledge base sources related to the discussion topics (37)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Technology doesn’t want to stop at borders and wants to cross borders and unite people worldwide.”

The knowledge base explicitly states that technology by its nature doesn’t want to stop at borders and seeks to cross them, supporting this framing [S1] and [S22].

Confirmed (high confidence)

“When we start doing too much governance … we start killing innovations.”

The same wording appears in the knowledge base, confirming the warning about over-governance stifling innovation [S3].

Additional Context (medium confidence)

“Zscaler’s firewall‑less, zero‑trust architecture.”

The knowledge base records Chaudhry advocating a Zero Trust architecture as a paradigm shift in cybersecurity, confirming the concept though not the specific firewall-less detail [S11].

Confirmed (high confidence)

“Zoom’s policy of not using customer content to train AI models.”

Zoom revised its Terms of Service to state it will not use customer communications data for AI training, confirming the claim [S47].

Additional Context (medium confidence)

“Call for a basic level framework of common norms that respects national sovereignty.”

International discussions emphasize the need for a regulatory framework that balances economic development, national sovereignty, and privacy, providing nuance to this call [S104] and [S103].

Additional Context (medium confidence)

“Regulation should be driven by use case rather than geography; a transparent, roughly uniform framework would be valuable.”

Sources highlight the importance of aligning standards globally rather than fragmented national rules, supporting a use-case-driven, uniform regulatory approach [S103] and [S104].

External Sources (104)
S1
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S2
https://dig.watch/event/india-ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S3
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -David Zapolsky: Chief Global Affairs and Legal Officer at Amazon
S4
The Role of Government and Innovators in Citizen-Centric AI — – Arthur Mensch- Jarek Kutylowski
S5
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — – Jarek Kutylowski envisioned enhanced global collaboration that transcends language and geographic barriers And Dr. Ja…
S6
https://app.faicon.ai/ai-impact-summit-2026/how-the-eus-gpai-code-shapes-safe-and-trustworthy-ai-governance-india-ai-impact-summit-2026 — is the CEO of DeepL. So to set up the conversation. I wanted to ask each of our panelists to help us think through the A…
S7
https://app.faicon.ai/ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — And Dr. Jarek Kutylowski. How did I do there? Thank you, is the CEO of DeepL. So to set up the conversation, I wanted to…
S8
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Aparna Bawa: Chief Operating Officer (COO) of Zoom
S9
Driving U.S. Innovation in Artificial Intelligence — 7. Jason Oxman – President & CEO, Information Technology Industry Council 8. Julia Stoyanovich – Associate Professor, De…
S10
Agentic AI in Focus Opportunities Risks and Governance — -Jason Oxman- Moderator/Host, appears to be with ITI (Information Technology Industry Council)
S11
Cutting through Cyber Complexity / DAVOS 2025 — – Jay Chaudhry: CEO, Chairman, and Founder of Zscaler 3. Zero Trust Architecture: Jay Chaudhry, CEO of Zscaler, argued …
S12
https://app.faicon.ai/ai-impact-summit-2026/aligning-ai-governance-across-the-tech-stack-iti-c-suite-panel — The challenge of not just managing risk as an industry, but also doing so in a way that supports global innovation and i…
S13
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Jay Chaudhry: CEO of Zscaler (security expert)
S14
The Internet in 20 Years Time: Avoiding Fragmentation | IGF 2023 WS #109 — The argument presented is that the future concern lies more in the legal and regulatory aspect of technology rather than…
S15
How Trust and Safety Drive Innovation and Sustainable Growth — Fantastic. And I misspoke. It’s prospective, not prescriptive regulation. But John and Denise, maybe talk to us a little…
S16
US tech leaders oppose proposed export limits — A prominenttechnology trade grouphas urged the Biden administration to reconsider a proposed rule that would restrict gl…
S17
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Yeah, thank you, Madhu. And thank you to Rebecca and Partnership on AI for convening this really important conversation….
S18
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S19
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Owen Lauder- Michael Brown- Austin Marin Industry-led, consensus-based approach to standards development is preferred…
S20
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — AI governance involves identifying different roles like AI users, technical vendors or AI providers, government regulato…
S21
WS #283 AI Agents: Ensuring Responsible Deployment — Enhanced education and relationship building between policymakers, private sector, and civil society stakeholders is ess…
S22
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Summary:While both speakers oppose excessive regulation, Jay focuses on the balance between alignment and innovation, wh…
S23
Discussion Report: Sovereign AI in Defence and National Security — The presentation outlines six key dimensions of AI sovereignty: data control, model control, training and alignment over…
S24
Building Sovereign and Responsible AI Beyond Proof of Concepts — AI sovereignty encompasses who controls the AI system, including data location, model access, security measures, and the…
S25
World Economic Forum 2025 at Davos — During the DAVOS 2025 sessions, the topic of how AI-driven cybersecurity measures can avoid creating new vulnerabilities…
S26
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S27
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Both speakers agree that stakeholder engagement is essential in complementing global governance. Galia specifically high…
S28
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — References to financial crises being born from misleading or dangerous financial innovat…
S29
WS #162 Overregulation: Balance Policy and Innovation in Technology — It prompted discussion of specific examples where regulation enabled or catalyzed innovation, adding nuance to the debat…
S30
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S31
Agents of Change AI for Government Services & Climate Resilience — Summary:There is unanimous agreement that while AI agents offer significant benefits, robust guardrails, transparency, a…
S32
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Ana Cristina Ruelas: Thank you, Frédéric. That gives me the leads directly to talk to Nadja, because I think that one of…
S33
Lightning Talk #7 Privacy Redefined: equitable Access in the AI Age — This comment critically examines the limitations of media literacy as a primary solution, arguing that over-reliance on …
S34
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter stands out as a senior AI policy advisor, closely studying the socio-economic implications of artificial …
S35
Global AI Policy Framework: International Cooperation and Historical Perspectives — A lot of good things are happening. We might not be seeing any impact right now maybe due to some temporary political leade…
S36
The role of standards in shaping a safe and sustainable AI-driven future — Onoe acknowledged the rise of a novel AI innovation ecosystem and the indispensable role of standards in extending this …
S37
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — International consensus and standards should emerge while respecting sovereignty
S38
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S39
Discussion Report: Sovereign AI in Defence and National Security — Faisal argues that sovereign AI is not about isolationism but about being a credible and independent partner within alli…
S40
The Global Economic Outlook — Panelists expressed concern about rising tariffs and protectionist measures globally. Shanmugaratnam advocated for a glo…
S41
WS #162 Overregulation: Balance Policy and Innovation in Technology — The overall tone was thoughtful and constructive. Panelists acknowledged the complexity of the issues and the need to ba…
S42
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Arguments:Some level of alignment is needed but over-alignment can kill innovation Cross-border data flows are essential…
S43
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S44
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S45
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-level forum at the IGF 2024 in Riyadh that brought together leaders from gover…
S46
Why science matters in global AI governance — This discussion focused on the critical role of science in international AI governance, centered around the United Natio…
S47
Zoom revises TOS clarifying it won’t use customer data for AI training — Zoom has once again revised its terms of service (TOS) to remove any mention of the use of content collected from its comm…
S48
Building the Next Wave of AI_ Responsible Frameworks & Standards — And I think the second point we should think about is I think the human state of mind works well in default versus optio…
S49
Building the Next Wave of AI_ Responsible Frameworks & Standards — yeah so I think to the point Ankush was mentioning AI technology is fundamentally designed on probabilistic model and an…
S50
Microsoft rejects AI training allegations — Microsoft has refuted allegations that it uses data from its Microsoft 365 applications, including Word and Excel, to trai…
S51
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S52
Conversation: 02 — “So that’s why without trust and safety and understanding of what’s happening in your underlying environment, it becomes…
S53
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And this requires proactive and coherent policy responses. First, people must be at the center of AI strategy, as we hea…
S54
What is it about AI that we need to regulate? — In WS #193, when directly asked about universal cybersecurity standards, Osei Keija responded that “There’s nothing like o…
S55
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — Miebach elaborated on the zero-trust approach, explaining that it requires verification at every access point using mult…
S56
Acknowledgements — For example, edge computing, enabled by 5G, cloud computing, and ubiquitous access to digital devices, will fundamentall…
S57
Introduction — The scale, complexity, and decentralized design of 5G architectures make it infeasible to depend upon perimeter security…
S58
IGF 2016 – Dynamic coalition on connecting the unconnected — Having prescriptive regulation may hinder innovation and impede reaching some areas.
S59
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Basma Ammari from Meta highlighted their open-source approach to large language models, emphasizing the importance of fa…
S60
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — The partnership between enterprises and users is critical – enterprises must provide scalable security controls while us…
S61
Press Conference: Closing the AI Access Gap — Data strategies are another critical aspect in the AI era. Countries need robust data strategies that include sharing fr…
S62
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Global AI Governance Alignment: The panelists discussed the critical need for international coordination on AI governanc…
S63
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S64
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Chaudhry warns that if each nation imposes its own AI rules, companies operating across borders will face fragmented com…
S65
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The panelists stressed the need for harmonized global regulations to avoid fragmentation and ensure interoperability acr…
S66
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — References to financial crises being born from misleading or dangerous financial innovat…
S67
Pathways to De-escalation — Balance regulation and innovation by finding the right regulatory approach
S68
Panel Discussion: 01 — Both speakers agree that regulation should strike a balance between protecting users and fostering innovation, avoiding …
S69
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S70
Responsible AI for Children Safe Playful and Empowering Learning — “safety, privacy, these are absolutely foundational and non‑negotiable as we’ve seen on the LEGO education side and simi…
S71
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Agreed with:Aparna Bawa, David Zapolsky, Jarek Kutylowski — Security and trust are foundational requirements Agreed wit…
S72
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Evidence:India stands at this defining moment in its digital journey as AI becomes a powerful engine to innovation and p…
S73
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — Ana Cristina Ruelas: Thank you, Frédéric. That gives me the leads directly to talk to Nadja, because I think that one of…
S74
Lightning Talk #7 Privacy Redefined: equitable Access in the AI Age — This comment critically examines the limitations of media literacy as a primary solution, arguing that over-reliance on …
S75
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Andrew Campling:Okay, thank you. Andrew Campling, I run a public policy, public affairs consultancy, but also a trustee …
S76
High-level AI Standards panel — ## Challenges and Future Considerations ## Future Initiatives and Coordination Paul Gaskell: Thank you. And thank you,…
S77
The role of standards in shaping a safe and sustainable AI-driven future — Onoe acknowledged the rise of a novel AI innovation ecosystem and the indispensable role of standards in extending this …
S78
UN: Summit of the Future Global Call — Guinea:I reaffirm a shared conviction that the future must not be suffered, but built. We have the power to shape this f…
S79
AI Governance Dialogue: Steering the future of AI — ## Concrete Commitments and Outcomes Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on A…
S80
Building Trusted AI at Scale – Keynote Anne Bouverot — Overall Tone:The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and ap…
S81
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S82
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S83
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S84
WSIS+20 High-Level Dialogue: WSIS Legacy in Motion: Honoring the Past, Shaping the Future — The discussion maintained a predominantly positive and constructive tone throughout, celebrating significant achievement…
S85
WS #162 Overregulation: Balance Policy and Innovation in Technology — The overall tone was thoughtful and constructive. Panelists acknowledged the complexity of the issues and the need to ba…
S86
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S87
WS #179 Navigating Online Safety for Children and Youth — The tone of the discussion was thoughtful and constructive, with panelists and audience members offering different persp…
S88
Deepfakes for good or bad? — The tone was thoughtful and pragmatic throughout, balancing concern with cautious optimism. The panelists acknowledged s…
S89
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S90
Lightning Talk #65 Enhancing Digital Trust From Rigidity to Elasticity — Byberg, from Weibo Japan Tech Practice, provided a crucial business perspective that grounded the theoretical discussion…
S91
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S92
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you so much, Dr. Ali Mahmood. I’m from Pakistan. I’m heading a provincial government entity that is invol…
S93
AI in Mobility_ Accelerating the Next Era of Intelligent Transport — The discussion maintained a serious, urgent tone throughout, driven by the gravity of India’s road safety crisis. While …
S94
Discussion Report: AI Implementation and Global Accessibility — The tone was consistently optimistic and collaborative throughout the conversation. Both speakers maintained a construct…
S95
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion maintained a constructive and optimistic tone throughout, despite acknowledging significant challenges. S…
S96
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The tone was consistently collaborative, optimistic, and forward-looking throughout the session. Delegates maintained a …
S97
AI for Good Technology That Empowers People — The discussion maintained a consistently optimistic and collaborative tone throughout. It began with inspirational frami…
S98
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S99
Policymaker’s Guide to International AI Safety Coordination — In terms of what is the key to success, what is the most important lesson on looking back on what we need, trust is buil…
S100
Agentic AI in Focus Opportunities Risks and Governance — Fantastic. Carly? I love how you characterize it as moving from what we call assistive AI to operational AI. In other w…
S101
Opening of the EuroDIG2024 and Baltic Domain Days — Prime Minister Ingrida Šimonytė continued the theme of digital responsibility, addressing the challenges posed by disinf…
S102
Workshop 2: The Interplay Between Digital Sovereignty and Development — Karen Mulberry synthesizes the workshop discussion to argue that European digital sovereignty needs to be grounded in fu…
S103
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — Truncale emphasizes that clients want greater collaboration across the globe on issues that impact them, and they want s…
S104
IGF 2022 Messages — development and economic value generation, in different contexts, while respecting national sovereignty and user privacy…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Jason Oxman
5 arguments · 158 words per minute · 2190 words · 829 seconds
Argument 1
Need for cross‑border alignment to prevent fragmentation
EXPLANATION
Oxman stresses that because AI technologies naturally cross national borders, governments must coordinate their AI governance approaches to avoid a fragmented regulatory landscape that could hinder global innovation and interoperability.
EVIDENCE
He notes that “there is a need for governments around the world to align their approaches to AI governance, because, of course, technology doesn’t, by its very nature, want to stop at borders” and that “it wants to cross borders and unite people around the world” [15-17]. He also frames the discussion as a challenge of managing risk while supporting global innovation [1-2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for cross-border regulatory alignment to avoid fragmentation is highlighted in discussions about legal and regulatory alignment across nations [S14] and in the ITI C-Suite panel where Oxman stresses technology’s border-less nature [S1].
MAJOR DISCUSSION POINT
Global AI Governance Alignment
AGREED WITH
Aparna Bawa, David Zapolsky, Jarek Kutylowski
DISAGREED WITH
Jay Chaudhry
Argument 2
Risk‑innovation trade‑off requires a sliding‑scale approach; over‑prescriptive rules harm rollout
EXPLANATION
Oxman argues that regulation should be calibrated: too much governance stifles innovation, while too little leaves risks unmanaged. A sliding‑scale, risk‑based approach allows flexibility for different use‑cases and jurisdictions.
EVIDENCE
He observes that “the more regulation, the less room there is for innovation” and that “risk-management is a sliding scale” when discussing the balance between risk and innovation [179-180]. He later asks panelists to share examples of flexible risk-based approaches versus overly prescriptive rules [179-180].
MAJOR DISCUSSION POINT
Balancing Regulation and Innovation (Risk‑Based Approach)
AGREED WITH
Jay Chaudhry, David Zapolsky, Aparna Bawa
Argument 3
Security overlay is a vital component of AI excitement and adoption
EXPLANATION
Oxman emphasizes that trust and security must be integral to AI discussions; without a security layer, the benefits of AI cannot be fully realized.
EVIDENCE
He frames the security conversation as essential, stating “I’ve heard you remind members of the government of India… that although the five pillars are enormously valuable, if you don’t have security overlaying them, we’re all in trouble” [84-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Security as a foundational layer for trustworthy AI is emphasized in the AI security overview that outlines the role of security guardrails and trust building [S20] and in the AI agents session that calls for privacy and outcome trust as baseline requirements [S21].
MAJOR DISCUSSION POINT
Security, Trust, and Sovereignty
AGREED WITH
Jay Chaudhry, David Zapolsky, Aparna Bawa
Argument 4
Desire for convergent international AI standards while respecting sovereignty
EXPLANATION
Oxman calls for the development of global AI standards that harmonize regulations across countries, while still allowing individual nations to maintain sovereign policy choices.
EVIDENCE
In his forward-looking question he asks for “a convergent international AI standard” that respects sovereignty and mentions the possibility of ISO-like standards as a model for common principles [330-334][391-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for an ISO-style international AI standard that balances common principles with national sovereignty are echoed in the ITI discussion of ISO 42001 as a common technical framework [S3] and in industry-led consensus approaches to standards [S19].
MAJOR DISCUSSION POINT
Future Vision: Global Standards, Inclusivity, and Collaboration
AGREED WITH
David Zapolsky, Jarek Kutylowski
Argument 5
Emphasized need for user training and responsibility in AI deployment
EXPLANATION
Oxman highlights that both enterprises and end‑users must be educated about safe AI usage, stressing that user awareness is a key part of responsible AI deployment.
EVIDENCE
He asks panelists about “training the user” and the downstream responsibilities of both users and enterprises, linking this to the broader AI governance conversation [131-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of tailored training for AI stakeholders and user awareness is discussed in the cybersecurity-focused AI governance report that stresses role-specific education [S20] and in the AI agents session that highlights user control and education as essential [S21].
MAJOR DISCUSSION POINT
Enterprise and User Responsibility / Downstream Impact
AGREED WITH
Aparna Bawa, Jarek Kutylowski
Jay Chaudhry
5 arguments · 142 words per minute · 1116 words · 469 seconds
Argument 1
Over‑alignment can stifle innovation; some alignment is sufficient
EXPLANATION
Chaudhry warns that while some degree of global alignment is beneficial, excessive uniformity can hinder innovation and create unnecessary compliance burdens.
EVIDENCE
He states that “some line of alignment is good, but over-alignment doesn’t help either” and adds that “when we start doing too much governance, too much compliance, we start killing innovations” [24-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
While alignment is needed, over-alignment is warned against as potentially hindering innovation in the ITI C-Suite panel [S1] and in broader fragmentation concerns that stress a balanced approach [S14].
MAJOR DISCUSSION POINT
Global AI Governance Alignment
DISAGREED WITH
Jason Oxman
Argument 2
Over‑regulation kills innovation; policies must be flexible and impact‑driven
EXPLANATION
Chaudhry argues that regulation should be proportionate and focused on actual impact rather than imposing blanket rules that could suppress technological progress.
EVIDENCE
He notes that “some level of governance is needed” but “when we start doing too much governance… we start killing innovations” [26-28]. Later he stresses the need for “a flexible policy that evolves” to keep pace with AI developments [180-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel discussion highlights the need for flexible, impact-based policies rather than rigid rules [S3], and summaries of the summit note that excessive regulation can backfire and stifle innovation [S22].
MAJOR DISCUSSION POINT
Balancing Regulation and Innovation (Risk‑Based Approach)
AGREED WITH
Jason Oxman, David Zapolsky, Aparna Bawa
DISAGREED WITH
David Zapolsky
Argument 3
AI abuse risk demands security across all layers; sovereignty includes access control
EXPLANATION
Chaudhry highlights that AI can be weaponized, requiring security measures at every layer of the stack, and that sovereignty should encompass who can access and manipulate AI systems, not just where they reside.
EVIDENCE
He describes AI as “dangerous because this technology can be abused” and calls for “a layer of security across all five layers” including concerns about data poisoning and malicious control [87-95]. He also expands sovereignty to include access control, not just geographic location [94-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Security across the AI stack and expanded notions of sovereignty are addressed in the AI governance roles overview [S20] and in the AI sovereignty dimensions framework that includes access control [S23]; zero-trust concepts reinforce the layered security argument [S11].
MAJOR DISCUSSION POINT
Security, Trust, and Sovereignty
AGREED WITH
Jason Oxman, David Zapolsky, Aparna Bawa
Argument 4
Governments should proactively address AI‑enabled threats to avoid reactionary over‑regulation
EXPLANATION
Chaudhry urges governments to focus on emerging AI‑driven security threats—such as ransomware, phishing, and nation‑state exploitation—so that they can act pre‑emptively rather than imposing restrictive regulations after incidents occur.
EVIDENCE
He outlines AI-enabled threats like rapid ransomware attacks, AI-generated phishing emails, and rogue AI agents, warning that “if they blow up, then government starts tightening things more and more, which doesn’t sometimes help” [340-361].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proactive threat mitigation to prevent reactionary regulation is discussed in the ITI panel’s call to protect against AI-driven threats before governments overreact [S1] and in Chaudhry’s own remarks on AI-enabled cyber-attacks prompting pre-emptive measures [S11].
MAJOR DISCUSSION POINT
Future Vision: Global Standards, Inclusivity, and Collaboration
Argument 5
Data classification and zero‑trust architecture protect critical assets
EXPLANATION
Chaudhry stresses the importance of classifying data based on its criticality and implementing zero‑trust, firewall‑less architectures to secure high‑value assets while avoiding over‑broad compliance burdens.
EVIDENCE
He shares an anecdote from GE about protecting IP for jet engines versus consumer appliances, illustrating that “all data is not created equal” and that compliance does not equal security [190-199]. He also describes Zscaler’s anti-firewall, zero-trust approach and the need to educate regulators about it [203-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Zero-trust architecture as a means to protect high-value data is detailed in Chaudhry’s presentation on zero-trust and firewall-less security [S11] and reinforced by later analyses of AI-driven cybersecurity strategies [S25].
MAJOR DISCUSSION POINT
Enterprise and User Responsibility / Downstream Impact
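Chaudhry’s pairing of data classification with zero-trust access can be sketched as a simple policy check: every request is verified explicitly against identity, device posture, and the sensitivity of the asset, with no trust implied by network location. This is an illustrative sketch under assumed tier names and thresholds, not Zscaler’s implementation.

```python
from dataclasses import dataclass

# "All data is not created equal": assign each asset a sensitivity tier.
SENSITIVITY = {"jet_engine_ip": 3, "hr_records": 2, "cafeteria_menu": 1}

@dataclass
class Request:
    user_verified: bool      # strong identity check (e.g. MFA) passed
    device_compliant: bool   # device posture check passed
    asset: str               # data asset being accessed
    clearance: int           # caller's clearance level

def authorize(req: Request) -> bool:
    """Zero-trust check: verify every request explicitly;
    network location grants nothing by itself."""
    if not (req.user_verified and req.device_compliant):
        return False
    # Unknown assets default to the most restrictive tier.
    return req.clearance >= SENSITIVITY.get(req.asset, 3)

# High-value IP is denied even to a verified, compliant device
# with insufficient clearance; low-tier data is allowed.
print(authorize(Request(True, True, "jet_engine_ip", 2)))   # False
print(authorize(Request(True, True, "cafeteria_menu", 1)))  # True
```

The point of the sketch is that the decision depends on the asset’s classification, not on where the request originates, which is the core of the firewall-less model described above.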
Aparna Bawa
5 arguments · 180 words per minute · 1935 words · 643 seconds
Argument 1
Cross‑border data flows are essential; misalignment creates trade‑offs
EXPLANATION
Bawa argues that global connectivity, exemplified by Zoom’s service, depends on unrestricted cross‑border data flows, and that governmental restrictions can hinder both corporate operations and citizen progress.
EVIDENCE
She explains that “Zoom… would not be able to connect with people globally if we did not have cross-border data flow” and that “when governments start putting more and more restrictions… it impedes their own citizens’ progress” [47-51]. She also references the stark contrast observed during COVID-19 as an illustration of global inequities [39-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The essential nature of unrestricted cross-border data flows and the risks of fragmented regulation are highlighted in the fragmentation-avoidance brief [S14] and in the ITI discussion of technology’s border-less character [S1].
MAJOR DISCUSSION POINT
Global AI Governance Alignment
AGREED WITH
Jason Oxman, David Zapolsky, Jarek Kutylowski
Argument 2
Provide customer choice and tiered controls to balance security with usability
EXPLANATION
Bawa emphasizes that Zoom offers configurable security and privacy settings, allowing enterprises and individual users to select appropriate controls based on their risk tolerance and operational needs.
EVIDENCE
She describes how Zoom provides toggles for enterprises to enable waiting rooms, passcodes, and other controls, while also offering a simple experience for consumers, illustrating the “sliding scale” of security versus usability [232-250]. She notes that different customer categories (e.g., HSBC vs a personal user) receive tailored options [231-236].
MAJOR DISCUSSION POINT
Balancing Regulation and Innovation (Risk‑Based Approach)
AGREED WITH
Jason Oxman, Jay Chaudhry, David Zapolsky
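The “sliding scale” of tiered controls Bawa describes can be illustrated with a minimal configuration sketch: enterprise tenants start from stricter defaults, consumer accounts from a simpler experience, and either can adjust individual toggles. The tier names and settings here are hypothetical, not Zoom’s actual configuration schema.

```python
# Hypothetical per-tier security defaults: stricter for enterprises,
# simpler for consumers, with per-customer overrides (the "toggles").
DEFAULTS = {
    "enterprise": {"waiting_room": True, "passcode": True, "e2e_encryption": True},
    "consumer":   {"waiting_room": False, "passcode": True, "e2e_encryption": False},
}

def effective_settings(tier, overrides=None):
    """Start from the tier's defaults, then apply the customer's own toggles."""
    settings = dict(DEFAULTS[tier])
    settings.update(overrides or {})
    return settings

# An enterprise keeps strict defaults; a consumer opts in to waiting rooms.
print(effective_settings("enterprise"))
print(effective_settings("consumer", {"waiting_room": True}))
```

The design choice mirrored here is that risk tolerance is chosen per customer category rather than imposed uniformly, which is what lets security coexist with usability.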
Argument 3
Enterprises must embed strong controls, educate users, and avoid using customer data for training
EXPLANATION
Bawa states that Zoom commits to not using customer content to train AI models and stresses the need to educate users—especially younger ones—about safe prompt practices to protect personal information.
EVIDENCE
She notes that Zoom has a policy “we will not use our customer content to train data” and advises users not to put personal addresses into prompts, citing her own children’s use of AI as an example [124-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Enterprise responsibility for strong controls and user education aligns with the AI governance roles report that stresses training and privacy safeguards [S20] and the AI agents session that underscores privacy as a baseline and the need for user awareness [S21].
MAJOR DISCUSSION POINT
Security, Trust, and Sovereignty
AGREED WITH
Jason Oxman, Jay Chaudhry, David Zapolsky
Argument 4
Partnership between users and enterprises; user education is crucial for safe AI use
EXPLANATION
Bawa describes AI adoption as a collaborative effort where both the provider and the end‑user share responsibility, highlighting the need for user awareness and enterprise‑provided safeguards.
EVIDENCE
She says “It is a partnership” and recounts teaching her kids about safe AI usage, emphasizing that both the enterprise and the user must maintain controls and awareness [102-108][122-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The partnership model and emphasis on user education are reflected in the AI agents discussion that calls for shared responsibility and user control [S21] and in the broader AI governance training framework [S20].
MAJOR DISCUSSION POINT
Enterprise and User Responsibility / Downstream Impact
AGREED WITH
Jason Oxman, Jarek Kutylowski
Argument 5
Push for inclusive AI access, upskilling, and rural adoption to expand market
EXPLANATION
Bawa calls for broader AI inclusion, especially in underserved regions, arguing that expanding AI access in rural areas not only benefits societies but also creates new market opportunities for enterprises like Zoom.
EVIDENCE
She references the summit’s focus on inclusivity, mentions villages in Karnataka with low bandwidth, and argues that “if a farmer can adopt AI and can change their lives… that is good for business” [364-374]. She also cites the founder’s personal story of using Zoom to stay connected with his spouse as an illustration of AI’s human impact [366-368].
MAJOR DISCUSSION POINT
Future Vision: Global Standards, Inclusivity, and Collaboration
David Zapolsky
5 arguments · 169 words per minute · 1827 words · 645 seconds
Argument 1
Free flow of goods and information is vital; regulation creates friction, common principles needed
EXPLANATION
Zapolsky argues that Amazon’s global operations rely on unrestricted movement of goods, data, and services, and that governmental barriers introduce friction, underscoring the need for shared, high‑level principles rather than fragmented rules.
EVIDENCE
He explains that Amazon’s stores, cloud, entertainment, and satellite services all depend on “free flow of goods, free flow of information, open skies” and that “every time a government erects barriers… it creates friction” [60-62]. He later calls for “some common principles” to avoid slowing down innovation [67-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of free flow and the friction caused by regulatory barriers are discussed in the fragmentation-avoidance analysis [S14] and in the ITI panel where Zapolsky stresses alignment to prevent friction [S1].
MAJOR DISCUSSION POINT
Global AI Governance Alignment
AGREED WITH
Jason Oxman, Aparna Bawa, Jarek Kutylowski
DISAGREED WITH
Jay Chaudhry
Argument 2
Focus regulation on high‑risk uses rather than blanket rules; avoid premature restrictions
EXPLANATION
Zapolsky suggests regulators should target AI applications that pose clear, high‑impact risks—such as decisions affecting life, health, or civil rights—rather than imposing broad, undefined constraints that could hinder beneficial uses.
EVIDENCE
He proposes stepping back to identify “high risk use” like decisions affecting life, health, or civil rights, and to build regulations from observed harms rather than a “unified field theory of AI regulation” [67-68]. He cites Colorado’s premature AI law as an example of over-broad regulation causing hesitation [66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targeted regulation of high-risk AI applications is advocated in the summit summary that warns against premature, broad rules [S22] and in the ITI discussion of building regulation from observed harms rather than a unified theory [S3].
MAJOR DISCUSSION POINT
Balancing Regulation and Innovation (Risk‑Based Approach)
AGREED WITH
Jason Oxman, Jay Chaudhry, Aparna Bawa
Argument 3
Build security into cloud services with guardrails; keep customer data private
EXPLANATION
Zapolsky describes Amazon Bedrock’s design, which embeds security controls, data‑ownership guarantees, and content‑filtering guardrails, ensuring that customers retain control over AI outputs and that their data never leaves their environment.
EVIDENCE
He details that Bedrock provides “guardrails” for output control, that “the data they use… stays their data” and does not go to model builders or Amazon, and that disclosures are built into the service for transparency [150-159]. He also notes the platform’s security and scalability features [142-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding security guardrails and data-ownership guarantees in cloud AI services aligns with the AI security roles overview that stresses built-in safeguards and transparency [S20] and the AI agents session’s focus on trust and privacy guardrails [S21].
MAJOR DISCUSSION POINT
Security, Trust, and Sovereignty
AGREED WITH
Jason Oxman, Jay Chaudhry, Aparna Bawa
Argument 4
Upstream governance decisions shape downstream customer capabilities; provide tools and transparency
EXPLANATION
Zapolsky explains that Amazon’s upstream policies—such as model testing, bias mitigation, and data‑privacy safeguards—are built into its cloud services, giving enterprises the tools to manage AI responsibly and understand how the technology works.
EVIDENCE
He outlines that Amazon’s cloud offers an “environment” with testing, bias correction, security, and a choice among 100+ models, and that it provides guardrails, disclosures, and transparency to help enterprises control AI deployment [137-160].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of upstream governance on downstream capabilities is highlighted in the AI governance roles report that describes upstream testing, bias mitigation, and transparency tools for enterprises [S20] and reinforced by discussions on user-centric outcomes [S21].
MAJOR DISCUSSION POINT
Enterprise and User Responsibility / Downstream Impact
Argument 5
Aim for global consensus on AI governance, possibly via ISO‑like standards
EXPLANATION
Zapolsky advocates for an international, standards‑based approach to AI regulation, suggesting that a consensus framework similar to ISO standards could harmonize practices while allowing national nuances.
EVIDENCE
He references ISO 42001 as an example of a technical standard that could provide “a common set of principles and a common set of technical standards” for AI governance [391-392]. He also mentions emerging consensus from global forums and agreements like the Hiroshima agreements [384-389].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The proposal for ISO-style global AI standards is echoed in the ITI panel’s reference to ISO 42001 as a common technical framework [S3] and in industry-led consensus approaches that favor standards over direct regulation [S19].
MAJOR DISCUSSION POINT
Future Vision: Global Standards, Inclusivity, and Collaboration
AGREED WITH
Jason Oxman, Jarek Kutylowski
Jarek Kutylowski
5 arguments, 159 words per minute, 1076 words, 403 seconds
Argument 1
Transparent, similar frameworks worldwide benefit companies and users
EXPLANATION
Kutylowski argues that a common, transparent AI governance framework enables companies to operate globally and gives users confidence in consistent, trustworthy AI services.
EVIDENCE
He states that having “a common layer… a right balance… having a common understanding would be incredibly valuable” for both companies and customers [80-83]. He also links this to the mission of DeepL to enable multilingual communication across borders [71-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The benefit of transparent, harmonized frameworks for global operations is discussed in the fragmentation-avoidance brief [S14] and in the ITI conversation about ISO-based common standards [S3].
MAJOR DISCUSSION POINT
Global AI Governance Alignment
AGREED WITH
Jason Oxman, Aparna Bawa, David Zapolsky
Argument 2
Governance must adapt to varied use‑case criticality; flexibility is key
EXPLANATION
Kutylowski notes that AI applications differ in criticality—from casual email translation to patent or medical documentation—so governance frameworks must be flexible enough to address these varying risk levels.
EVIDENCE
He contrasts translation of an email with translation of patent applications and R&D documentation, describing the latter as “highly critical” and requiring different governance treatment [170-177]. He stresses the need for adaptable governance based on use-case [322-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for risk-based, flexible governance across different AI use-cases is highlighted in the AI governance roles overview that advocates adaptable policies based on criticality [S20].
MAJOR DISCUSSION POINT
Balancing Regulation and Innovation (Risk‑Based Approach)
Argument 3
Trust in AI outcomes is essential; privacy is a baseline requirement
EXPLANATION
Kutylowski emphasizes that beyond privacy, users need to trust the results produced by AI systems, especially when those outcomes affect important decisions.
EVIDENCE
He describes privacy as “table stakes” and says that building a “layer of trust into the outcomes of the AI” is crucial for both translation and agentic AI use cases [81-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust in AI outputs and privacy as a baseline are central themes in the AI agents session that stresses outcome trust and privacy as table-stakes [S21].
MAJOR DISCUSSION POINT
Security, Trust, and Sovereignty
Argument 4
Companies must help customers navigate regulations and offer flexible solutions
EXPLANATION
Kutylowski asserts that technology providers should assist their customers in understanding and complying with diverse regulatory environments, delivering adaptable products that meet local requirements.
EVIDENCE
He explains that DeepL’s role includes “helping customers figure this out for themselves” and managing regulatory complexity across markets, emphasizing a partnership approach [327-329].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Assisting customers with regulatory navigation and flexible product offerings aligns with the AI governance roles report that calls for provider-led guidance and adaptable solutions [S20].
MAJOR DISCUSSION POINT
Enterprise and User Responsibility / Downstream Impact
AGREED WITH
Jason Oxman, Aparna Bawa
Argument 5
Enable worldwide collaboration across languages through AI, supported by clear regulatory frameworks
EXPLANATION
Kutylowski envisions AI as a catalyst for global collaboration, allowing people of different languages and locations to work together, provided that regulatory frameworks are clear and supportive.
EVIDENCE
He describes DeepL’s mission to let “everyone collaborate… no matter where they sit geographically, no matter which language they speak” and hopes that future summits will showcase progress in making this possible, especially in India [316-324][395-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven global collaboration and the need for clear, supportive regulatory frameworks are discussed in the fragmentation-avoidance analysis [S14] and the ITI panel’s reference to ISO-based standards facilitating cross-border cooperation [S3].
MAJOR DISCUSSION POINT
Future Vision: Global Standards, Inclusivity, and Collaboration
AGREED WITH
Jason Oxman, David Zapolsky
Agreements
Agreement Points
Cross‑border alignment is essential to avoid regulatory fragmentation and enable global AI services
Speakers: Jason Oxman, Aparna Bawa, David Zapolsky, Jarek Kutylowski
Need for cross‑border alignment to prevent fragmentation
Cross‑border data flows are essential; misalignment creates trade‑offs
Free flow of goods and information is vital; regulation creates friction, common principles needed
Transparent, similar frameworks worldwide benefit companies and users
All four speakers stress that AI technologies naturally cross borders and that governments must coordinate their AI governance approaches to keep a unified market and avoid the friction caused by fragmented rules. Oxman notes that “technology … wants to cross borders and unite people around the world” [15-17]; Bawa points out that without cross-border data flows Zoom could not connect globally and that restrictions impede citizens’ progress [47-51]; Zapolsky explains Amazon’s reliance on “free flow of goods, free flow of information, open skies” and that barriers create friction [60-62]; Kutylowski says a “common layer” of transparent rules would be “incredibly valuable” for companies and customers [80-83].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for international consensus on AI governance while respecting national sovereignty, as highlighted in the ITI C-Suite panel and IGF discussions on cross-border data flows [S37][S42][S45][S61].
A flexible, risk‑based regulatory approach is needed; over‑prescriptive rules stifle innovation
Speakers: Jason Oxman, Jay Chaudhry, David Zapolsky, Aparna Bawa
Risk‑innovation trade‑off requires a sliding‑scale approach; over‑prescriptive rules harm rollout
Over‑regulation kills innovation; policies must be flexible and impact‑driven
Focus regulation on high‑risk uses rather than blanket rules; avoid premature restrictions
Provide customer choice and tiered controls to balance security with usability
The panel agrees that regulation must be proportionate and adaptable. Oxman frames the trade-off as a “sliding-scale” where too much regulation reduces innovation [179-180]; Chaudhry warns that “when we start doing too much governance… we start killing innovations” and calls for policies that evolve with technology [26-28][180-183]; Zapolsky advocates targeting “high-risk use” and warns against “unified field theory” regulations that slow progress [67-68]; Bawa describes Zoom’s tiered security toggles that let enterprises choose appropriate controls, illustrating a risk-based model [232-250].
POLICY CONTEXT (KNOWLEDGE BASE)
Panelists emphasized the need for principles-based, risk-focused regulation and warned against premature, overly prescriptive rules that could hinder adoption [S42][S41][S58][S59][S54].
Security and trust must be embedded across the AI stack to enable safe adoption
Speakers: Jason Oxman, Jay Chaudhry, David Zapolsky, Aparna Bawa
Security overlay is a vital component of AI excitement and adoption
AI abuse risk demands security across all layers; sovereignty includes access control
Build security into cloud services with guardrails; keep customer data private
Enterprises must embed strong controls, educate users, and avoid using customer data for training
All speakers highlight that without robust security and trust mechanisms AI cannot be deployed responsibly. Oxman stresses that “security overlaying” AI pillars is essential [84-86]; Chaudhry describes the need for “a layer of security across all five layers” and expands sovereignty to include who can access AI systems [87-95]; Zapolsky details Amazon Bedrock’s built-in guardrails, data-ownership guarantees and transparency [150-159]; Bawa notes Zoom’s policy of not using customer content for model training and the need to educate users on safe prompt practices [124-128].
POLICY CONTEXT (KNOWLEDGE BASE)
Trust and safety were identified as foundational for AI deployment, requiring security embedded throughout the stack, as discussed in WEF and trust-focused sessions [S51][S52].
User education and partnership between enterprises and end‑users are crucial for responsible AI use
Speakers: Jason Oxman, Aparna Bawa, Jarek Kutylowski
Emphasized need for user training and responsibility in AI deployment
Partnership between users and enterprises; user education is crucial for safe AI use
Companies must help customers navigate regulations and offer flexible solutions
The speakers concur that AI responsibility is shared. Oxman asks about “training the user” and downstream responsibilities [131-132]; Bawa describes AI adoption as a “partnership” and shares personal examples of teaching her children safe AI practices [102-108][122-128]; Kutylowski stresses that providers should help customers understand and comply with diverse regulations, offering adaptable products [327-329].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of enterprise-user collaboration and user literacy was underscored in the EU GPAI Code briefing and trust-safety dialogues [S60][S51][S52].
A convergent, international AI standards framework (e.g., ISO‑style) is desirable while respecting national sovereignty
Speakers: Jason Oxman, David Zapolsky, Jarek Kutylowski
Desire for convergent international AI standards while respecting sovereignty
Aim for global consensus on AI governance, possibly via ISO‑like standards
Enable worldwide collaboration across languages through AI, supported by clear regulatory frameworks
All three speakers envision a common set of principles that can guide AI governance globally. Oxman explicitly calls for a “convergent international AI standard” that respects sovereignty and mentions ISO-like models [330-334][391-392]; Zapolsky proposes an ISO-42001-style framework to give a “common set of principles and a common set of technical standards” [391-392]; Kutylowski envisions AI-driven global collaboration made possible by clear, harmonised regulatory frameworks [395-398].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for an ISO-style global standards framework that balances coordination with sovereignty were made in multiple panels on AI governance [S37][S42][S45].
Similar Viewpoints
Both argue that regulation should be proportionate, targeting concrete high‑risk applications rather than imposing broad, premature rules that could hinder innovation. Chaudhry stresses that "when we start doing too much governance… we start killing innovations" [26-28] and calls for a "flexible policy that evolves" [180-183]; Zapolsky recommends stepping back to identify "high‑risk use" and warns against a "unified field theory of AI regulation" that slows progress [67-68].
Speakers: Jay Chaudhry, David Zapolsky
Over‑regulation kills innovation; policies must be flexible and impact‑driven
Focus regulation on high‑risk uses rather than blanket rules; avoid premature restrictions
Both see the value of transparent, globally‑aligned frameworks that address security and sovereignty. Chaudhry calls for security across all AI layers and broader notions of sovereignty [87-95]; Kutylowski highlights the benefit of a "common layer" and transparent governance for users and companies [80-83].
Speakers: Jay Chaudhry, Jarek Kutylowski
Transparent, similar frameworks worldwide benefit companies and users
AI abuse risk demands security across all layers; sovereignty includes access control
Unexpected Consensus
Both the need for global alignment and the caution against over‑alignment were voiced by the same panel
Speakers: Jason Oxman, Jay Chaudhry
Need for cross‑border alignment to prevent fragmentation
Over‑alignment can stifle innovation; some alignment is sufficient
While Oxman strongly advocates for coordinated global AI governance to avoid fragmentation [15-17], Chaudhry simultaneously warns that “some line of alignment is good, but over-alignment doesn’t help either” and that excessive uniformity can kill innovation [24-28]. The coexistence of these positions, calling for alignment while warning against too much of it, represents an unexpected consensus on the nuanced balance required.
POLICY CONTEXT (KNOWLEDGE BASE)
The ITI C-Suite discussion highlighted this dual perspective, noting that some alignment is necessary but excessive alignment can stifle innovation [S42][S41].
Agreement that AI model training should not use customer‑generated content
Speakers: Aparna Bawa, David Zapolsky
Enterprises must embed strong controls, educate users, and avoid using customer data for training
Build security into cloud services with guardrails; keep customer data private
Both Bawa and Zapolsky explicitly state that customer data must remain private and not be used to train AI models. Bawa says Zoom will “not use our customer content to train data” [124-128]; Zapolsky notes that in Bedrock “the data they use… stays their data” and does not go to model builders or Amazon [150-152]. This shared stance on data ownership was not highlighted elsewhere in the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent corporate policy changes by Zoom and Microsoft explicitly prohibit using customer communications for AI model training, reflecting this consensus [S47][S50].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the necessity of cross‑border alignment and interoperable frameworks; (2) the importance of a flexible, risk‑based regulatory approach that avoids over‑prescription; (3) the requirement to embed security, trust and data‑ownership safeguards throughout the AI stack; and (4) the need for user education and a partnership model between enterprises and end‑users. A secondary consensus emerges around the desirability of global, ISO‑style standards that respect sovereignty.

High consensus on the direction of AI governance—participants largely agree on the principles of coordinated yet flexible regulation, security‑by‑design, and shared responsibility. This consensus suggests that future policy initiatives can build on these shared foundations, focusing on creating adaptable standards, promoting data‑privacy safeguards, and investing in user capacity development while allowing national nuances.

Differences
Different Viewpoints
Degree of global alignment needed for AI governance
Speakers: Jason Oxman, Jay Chaudhry
Need for cross‑border alignment to prevent fragmentation
Over‑alignment can stifle innovation; some alignment is sufficient
Oxman argues that AI technologies cross borders and therefore governments must coordinate their AI governance to avoid a fragmented regulatory landscape that would hinder global innovation and interoperability [15-17]. Chaudhry counters that while some alignment is useful, excessive uniformity creates compliance burdens that kill innovation, so only a limited line of alignment is desirable [24-28].
POLICY CONTEXT (KNOWLEDGE BASE)
Debate persists over how far alignment should go, with some experts urging extensive coordination and others warning against over-alignment that could limit national policy space [S42][S45].
Extent and focus of regulatory intervention on AI
Speakers: David Zapolsky, Jay Chaudhry
Free flow of goods and information is vital; regulation creates friction, common principles needed
Over‑regulation kills innovation; policies must be flexible and impact‑driven
Zapolsky stresses that unrestricted flow of goods, data and services is essential for Amazon’s global model and calls for shared high-level principles rather than fragmented rules, warning that barriers create friction [60-62] and advocating a focus on high-risk uses built from observed harms [67-68]. Chaudhry argues that too much governance and prescriptive rules stifle innovation and that a flexible, evolving policy that adapts to AI’s unknowns is required [180-183].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders differ on the scope of regulation, balancing the need for oversight with concerns that heavy-handed rules may impede innovation, as reflected in over-regulation discussions [S41][S58][S59][S54].
Security architecture approach – firewall vs. firewall‑less zero‑trust
Speakers: Jay Chaudhry, Other panelists (e.g., David Zapolsky)
AI security must span all five layers; zero‑trust, anti‑firewall architecture is needed
Standard security controls (e.g., firewalls) are assumed baseline in many cloud services
Chaudhry describes a layered security model covering application, model, data, etc., and notes that Zscaler’s zero-trust design rejects traditional firewalls, requiring regulators to be educated about this approach [203-210]. Other speakers (e.g., Zapolsky) describe building security into cloud services and guardrails but do not mention a firewall-less model, implying reliance on conventional security controls [142-144]. This reflects a divergence in preferred security architectures.
POLICY CONTEXT (KNOWLEDGE BASE)
The shift toward zero-trust models versus traditional perimeter firewalls is a contested design choice in AI security architectures, highlighted in cybersecurity panels and 5G strategy reports [S55][S56][S57].
Unexpected Differences
Assumptions about baseline security controls (firewall vs. firewall‑less zero‑trust)
Speakers: Jay Chaudhry, Other panelists (e.g., David Zapolsky)
AI security must span all five layers; zero‑trust, anti‑firewall architecture is needed
Standard cloud security models rely on conventional firewalls and do not mention firewall‑less designs
While most panelists discuss security in terms of guardrails, data ownership, and built-in controls, Chaudhry uniquely argues that traditional firewalls are obsolete for AI security and that regulators need education on zero-trust, firewall-less designs [203-210]. This stance was not echoed by other speakers, making it an unexpected point of divergence.
POLICY CONTEXT (KNOWLEDGE BASE)
Experts debated baseline security assumptions, with zero-trust advocated as the emerging baseline for modern AI-enabled networks [S55][S56][S57].
Overall Assessment

The panel largely converged on the need for global cooperation, risk‑based regulation, and security‑focused AI deployment. The principal disagreements centered on how much alignment and regulation is appropriate (Oxman vs. Chaudhry; Zapolsky vs. Chaudhry) and on the preferred security architecture (firewall‑less zero‑trust vs. conventional controls).

Moderate – while participants share common goals of trustworthy, innovative AI, they differ on the intensity of alignment, the scope of regulation, and specific security implementations. These differences suggest that achieving consensus will require careful calibration of global standards, flexible regulatory pathways, and clear communication about emerging security models.

Partial Agreements
All participants agree that AI systems must be trustworthy and secure, and that regulation should balance risk management with innovation. However, they differ on the mechanisms: Oxman calls for a security overlay across AI pillars [84-86]; Chaudhry pushes for layered, zero‑trust security across all five layers [87-95]; Zapolsky proposes built‑in guardrails and data‑ownership guarantees in cloud services [150-159]; Bawa emphasizes user‑level controls, education, and partnership [102-108][124-128]; Kutylowski highlights transparent, common frameworks and ISO‑style standards [80-83][391-392].
Speakers: Jason Oxman, Jay Chaudhry, David Zapolsky, Aparna Bawa, Jarek Kutylowski
Security overlay is a vital component of AI excitement and adoption
Risk‑innovation trade‑off requires a sliding‑scale, risk‑based approach
Need for convergent international AI standards while respecting sovereignty
All agree that a risk‑based, flexible approach is preferable to rigid, one‑size‑fits‑all regulation. Oxman frames it as a sliding scale where more regulation reduces innovation [179-180]; Chaudhry calls for flexible policies that evolve with AI [180-183]; Zapolsky suggests targeting high‑risk applications and building common principles from observed harms [67-68]; Kutylowski stresses adaptable governance based on use‑case criticality [322-325].
Speakers: Jason Oxman, Jay Chaudhry, David Zapolsky, Jarek Kutylowski
Balancing regulation and innovation (risk‑based approach)
Focus regulation on high‑risk uses rather than blanket rules
Takeaways
Key takeaways
Global AI governance needs alignment to avoid fragmentation, but over‑alignment can stifle innovation; a basic common framework with room for sovereign variations is preferred.
Cross‑border data, goods, and information flows are essential for AI services; excessive regulation creates friction and hampers economic growth.
A risk‑based, flexible regulatory approach that targets high‑risk uses is more effective than blanket, prescriptive rules.
Security, trust, and sovereignty must be embedded across all AI layers; AI abuse (e.g., ransomware, malicious agents) requires proactive security measures.
Enterprise‑user partnership is critical; enterprises must provide controls and education, while users must adopt responsible practices.
Leading companies (Zscaler, Zoom, Amazon, DeepL) are building guardrails, choice mechanisms, and transparent frameworks to enable safe AI deployment.
Future vision includes converging on international AI standards (e.g., ISO‑like) while promoting inclusive AI access, upskilling, and global collaboration.
Resolutions and action items
Develop a set of common principles for defining and regulating high‑risk AI applications (proposed by David).
Create transparent, interoperable governance frameworks that can be applied globally with minimal variation (suggested by Jarek).
Implement tiered controls and user‑choice toggles in AI‑enabled products to balance security and usability (outlined by Aparna).
Integrate security guardrails and data‑privacy safeguards into cloud AI services (Amazon Bedrock approach).
Encourage governments to proactively address AI‑enabled threats (ransomware, malicious agents) to avoid reactionary over‑regulation (Jay).
Promote inclusive AI deployment and upskilling initiatives in underserved regions, especially in the Global South (Aparna).
Work toward an international consensus on AI governance, potentially via ISO‑style standards (David).
Unresolved issues
Concrete mechanisms for achieving global regulatory alignment without imposing uniform rules across all jurisdictions.
Standardized criteria for classifying AI applications and defining “high‑risk” use cases across different sectors and countries.
Detailed governance models for autonomous AI agents operating across borders.
How to reconcile differing privacy and data‑protection regimes while maintaining free data flows.
Timeline and process for developing and adopting an international AI standards framework.
Suggested compromises
Adopt a basic, globally understood framework of norms and values while allowing sovereign-specific adaptations (Aparna).
Limit regulation to high‑risk AI scenarios rather than applying blanket rules to all AI technologies (David).
Provide enterprises with configurable security and privacy controls, letting them self‑regulate based on risk profile (Aparna).
Maintain flexible, evolving policies that can be updated as AI capabilities and threats mature, avoiding static over‑regulation (Jay).
Balance the need for alignment with the risk of over‑alignment by setting minimum common standards and leaving room for innovation (Jay).
Thought Provoking Comments
Some level of governance is needed, but when we start doing too much governance, too much compliance, we start killing innovations.
Highlights the delicate balance between necessary regulation and over‑regulation, framing the core tension of the discussion.
Set the stage for the recurring theme of ‘risk vs. innovation’. Prompted other panelists (Aparna, David) to discuss the dangers of premature or excessive rules and to propose more flexible, principle‑based approaches.
Speaker: Jay Chaudhry
We would not exist if we didn’t have cross‑border data flows and free unencumbered data flow. When governments start putting more and more restrictions on them within their own countries, it impedes their own citizens’ progress.
Connects AI governance to the fundamental infrastructure of data movement, emphasizing how fragmentation harms both business and citizens.
Shifted the conversation from abstract policy to concrete economic consequences, leading David to echo concerns about barriers to global services and prompting discussion on the need for a common set of norms.
Speaker: Aparna Bawa
The danger of regulation before you really understand the technology… we see buyers put implementation on hold… we need to step back, look for common principles, and work backwards from the harms we can see today.
Advocates a pragmatic, harm‑focused regulatory approach rather than speculative, blanket rules, introducing a concrete methodology for policy making.
Provided a framework that other speakers referenced (e.g., Jay’s call for flexible policy, Jarek’s emphasis on transparent frameworks). It steered the dialogue toward actionable steps rather than ideological positions.
Speaker: David Zapolsky
Any successful technology needs to be inherently global… we need a transparent framework that is not too different across the world.
Frames global AI governance as a prerequisite for scaling AI products, linking market viability directly to regulatory alignment.
Reinforced the global‑scale argument, prompting Jay and Aparna to discuss sovereignty versus interoperability, and underscored the later discussion about international standards.
Speaker: Jarek Kutylowski
AI is dangerous because this technology can be abused… data poisoning, rogue agents… we need a layer of security across all five layers.
Introduces the security dimension as a critical, often overlooked layer of AI governance, expanding the conversation beyond policy to technical safeguards.
Moved the discussion toward concrete security measures, leading Aparna to talk about user‑enterprise partnership and David to describe built‑in guardrails in cloud services.
Speaker: Jay Chaudhry
It’s a true partnership… the user plus the enterprise that is pushing out this technology… we must give users options (e.g., opt‑out of transcription) and provide controls that scale from consumer to enterprise.
Emphasizes shared responsibility and user‑centric design, adding depth to the governance debate by focusing on product‑level choices.
Shifted the tone from high‑level policy to practical implementation, influencing Jay’s later remarks on flexible risk‑based approaches and reinforcing the theme of choice introduced by David.
Speaker: Aparna Bawa
We build guardrails that allow enterprises to control model outputs, keep customer data private, and embed disclosures directly into the interface.
Provides a concrete example of upstream governance that empowers downstream users, illustrating how companies can self‑regulate within a global framework.
Validated the earlier calls for flexible, principle‑based regulation, and gave Jarek a reference point for how DeepL could implement similar controls for agentic AI.
Speaker: David Zapolsky
Compliance doesn’t mean security… regulators asked for firewalls, we’re anti‑firewalls. Over‑regulation leads to outdated compliance; we need policies that evolve with the technology.
Challenges the assumption that meeting compliance automatically ensures security, and highlights the lag between regulation and technological evolution.
Reinforced the need for adaptive, risk‑based policies, prompting other panelists to stress the importance of flexibility and to cite examples where rigid rules stalled product launches.
Speaker: Jay Chaudhry
Regulation must differentiate risk profiles – a shopping assistant is very different from a tool that helps doctors document patient care. A one‑size‑fits‑all approach will inhibit innovation.
Sharpens the argument for nuanced, use‑case‑specific regulation, moving the conversation from generic governance to sector‑specific considerations.
Led Jarek and Aparna to discuss how their products (translation vs. agentic AI, enterprise vs. consumer Zoom) require tailored safeguards, deepening the analysis of practical governance.
Speaker: David Zapolsky
I would like to see governments converge on a basic consensus – an international standard like ISO 42001 – that gives a common set of principles and technical standards.
Proposes a concrete pathway toward global alignment, linking policy to existing standards bodies and offering a tangible goal for the next year.
Provided a forward‑looking anchor for the closing round, influencing Jarek’s hopeful vision of worldwide collaboration and reinforcing the panel’s recurring call for global harmonization.
Speaker: David Zapolsky
Overall Assessment

The discussion pivoted around the tension between global innovation and fragmented regulation. Early remarks about over‑regulation and cross‑border data flows opened the floor to a series of concrete, practice‑oriented contributions. Each thought‑provoking comment introduced a new layer—security, user partnership, risk‑based differentiation, or international standards—that deepened the dialogue and shifted it from abstract policy to actionable mechanisms. Collectively, these insights steered the panel toward a consensus: effective AI governance requires flexible, harm‑focused principles, global interoperability, and built‑in technical safeguards, all while preserving the capacity for rapid innovation.

Follow-up Questions
How should “high‑risk” AI uses be defined and what common principles can guide their regulation?
David highlighted that regulators currently lack clear definitions of high‑risk AI and suggested working backwards from observable harms to create shared principles.
Speaker: David Zapolsky
What are the actual ways AI will be used and what potential harms might arise before imposing regulations?
He noted that the industry does not yet fully understand AI applications, urging research to map use‑cases and associated risks prior to rule‑making.
Speaker: David Zapolsky
How can governments implement flexible, risk‑based regulatory frameworks that evolve with AI technology?
Both emphasized that over‑prescriptive rules stifle innovation and called for adaptable policies that balance risk and progress.
Speaker: Jay Chaudhry, David Zapolsky
What security measures are needed across all AI layers, especially to prevent rogue AI agents and data‑poisoning?
Jay warned that AI agents could become the weakest link and advocated for a security overlay spanning the five AI layers.
Speaker: Jay Chaudhry
What impact do restrictions on cross‑border data flows have on AI innovation and global service delivery?
She argued that limiting data movement hampers progress and competitiveness of AI‑driven services.
Speaker: Aparna Bawa
How can users and enterprises be educated to avoid exposing sensitive information in AI prompts?
Aparna highlighted the need for user‑level guidance and policies to prevent unsafe data sharing with AI models.
Speaker: Aparna Bawa
In what ways do upstream AI governance decisions (e.g., cloud guardrails) influence downstream customer behavior and compliance?
He described Amazon’s approach of embedding security and control features, prompting study of downstream effects on adopters.
Speaker: David Zapolsky
What specific governance challenges arise with agentic AI compared to traditional translation services, and how can trust in outcomes be ensured?
Jarek noted that autonomous agents introduce higher stakes and require distinct governance and trust mechanisms.
Speaker: Jarek Kutylowski
How do differing national AI regulations affect go/no‑go product launch decisions for globally‑operating AI companies?
Both mentioned regulatory environments influencing market entry strategies, suggesting research on regulatory impact on product rollout.
Speaker: David Zapolsky, Jarek Kutylowski
Can an international AI standard (e.g., ISO 42001) be developed to provide common principles while respecting national sovereignty?
He referenced emerging consensus and the need for a global standard that balances uniformity with sovereign perspectives.
Speaker: David Zapolsky
What are the emerging AI‑enabled threats such as ransomware, phishing, and nation‑state attacks, and how can proactive defenses be built?
Jay warned that AI will accelerate malicious activities and called for focused research on defensive measures.
Speaker: Jay Chaudhry
How can inclusive AI upskilling be delivered to low‑bandwidth, underserved communities to ensure equitable access?
She expressed a desire to see AI benefits reach villages with limited connectivity, indicating a need for research on inclusive deployment.
Speaker: Aparna Bawa
What are the efficiency costs of having multiple state‑level privacy frameworks versus a unified national or international framework?
Aparna pointed out the inefficiency of fragmented privacy regimes and suggested studying the benefits of a common framework.
Speaker: Aparna Bawa
How should data be classified for AI risk management, distinguishing between critical IP and ordinary consumer data?
He gave examples showing that not all data warrants the same protection, indicating a need for nuanced data‑classification methods.
Speaker: Jay Chaudhry
How can zero‑trust architectures be extended to secure AI agents, including identity and authorization controls?
Jay highlighted the importance of applying zero‑trust principles to AI agents to prevent hijacking and misuse.
Speaker: Jay Chaudhry
What safeguards are needed when AI is used in critical domains such as drug R&D documentation and regulatory submissions?
He mentioned translation of high‑impact documents influencing FDA decisions, underscoring the need for rigorous governance in such use‑cases.
Speaker: Jarek Kutylowski

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Sovereign and Responsible AI Beyond Proof of Concepts


Session at a glance – Summary, keypoints, and speakers overview

Summary

The session opened by highlighting the proliferation of AI pilots worldwide and the low conversion of these projects into production, with only about 30 % reaching deployment, a shortfall attributed largely to a lack of trust in AI systems [4-6][11][13]. The presenters argued that trust must be considered at multiple levels – organizational, personal data handling, societal impact, and employment – and pointed to a rapid rise in documented AI harms, such as voice-cloning scams in Romania, AI-generated books without human oversight, and biased facial-recognition at borders, all of which erode public confidence [15-19][21-24][27-33][36-38].


To address these challenges, they introduced an “AI 4D” framework comprising sovereignty, green (sustainability), responsible, and valuable dimensions, urging that proof-of-concepts evaluate all four rather than merely functionality [64-66]. Illustrative scenarios were used: a health-AI model that exceeded power and water limits (green issue) [70-78][80-83]; a traffic-light optimization that improved speed but harmed low-income neighborhoods (value vs. responsibility) [89-92][95-100]; a justice-system AI hosted offshore lacking auditability (sovereignty) [108-112]; and a social-benefits AI that was opaque and biased (responsible and value) [118-124]. Audience participants echoed these points, noting regulatory gaps and the difficulty of balancing sovereignty with value, especially when relying on foreign models [131-144][154-162][167-176][181-190].


The speakers emphasized that trade-offs among the four dimensions are inevitable and must be explicitly managed, recommending organizations map concerns from high to low, develop AI policies, and adopt measurable KPIs for each lens [306-313][317-324]. They concluded by announcing a white paper summarizing eight to ten actionable steps for each dimension, stressing that no single lens suffices for scaling AI responsibly and sustainably [340-352][354-361][349-352].


Keypoints


Major discussion points


AI pilots rarely reach production because of trust deficits and emerging harms.


Only about 30 % of AI projects move beyond proof-of-concept, and many fail to gain organisational or public trust [11-13]. The OECD AI Observatory tracks a rapid rise in AI-related incidents – 600 reported harms in a single month – highlighting real-world risks that erode confidence [20-24]. Concrete examples (voice-cloning scams, AI-generated books without human oversight, biased facial-recognition at borders) illustrate how unchecked pilots generate fear and mistrust [28-38].


Six common reasons why proof-of-concepts stall.


The speakers enumerate six failure categories: (1) mismatch between adoption and impact, (2) governance/risk-management gaps, (3) misalignment with societal goals, (4) sovereignty concerns, (5) sustainability/energy-cost pressures, and (6) change-management and cultural readiness [42-60]. Sovereignty is highlighted as a hot topic across the summit [61-64].


The “AI 4D” framework for trustworthy scaling.


To address the above gaps, the presenters propose four lenses – Sovereignty, Green (sustainability), Responsible AI (ethics, bias, governance), and Valuable AI (real-world benefit) – that must be evaluated together before moving a pilot to production [64-66].


Scenario-based illustration of each dimension and audience validation.


Health-imaging pilot failed on sustainability (excessive compute, power, water) [73-82];


Traffic-light pilot succeeded technically but failed on value (community safety was ignored) [90-98];


Justice-system pilot exposed sovereignty issues (off-shore model, lack of auditability) [103-108];


Social-benefits pilot highlighted responsible-AI problems (unexplained decisions, bias, no appeal process) and the need for clear value [108-124].


Trade-offs between the four dimensions and practical next steps.


Participants discuss when sovereignty may outweigh value (or vice versa) and how sustainability can clash with rapid adoption [280-285][290-306]. The facilitators conclude with actionable guidance: draft AI policies, adopt responsible-AI frameworks, define measurable KPIs for each dimension, and up-skill teams to embed diverse perspectives [340-357].


Overall purpose / goal of the discussion


The session aimed to diagnose why the majority of AI pilots never become production-grade systems, introduce a holistic “AI 4D” model to build trust, and equip attendees with concrete steps (policy, metrics, governance, skills) to move from experimental proofs of concept to impactful, trustworthy AI deployments [4-13][64-66].


Overall tone and its evolution


– The opening minutes are informative and formal, presenting statistics and definitions [1-13].


– Mid-session becomes interactive and exploratory, using audience polls, scenario role-plays, and real-time hand-raising to surface concerns [94-108][210-224].


– Towards the end the tone shifts to reflective and solution-focused, summarising trade-offs, emphasizing collaboration, and offering supportive next-step recommendations [280-306][340-357].


Overall, the conversation moves from a problem-statement stance to a collaborative, constructive dialogue aimed at practical implementation.


Speakers

Theresa Yurkewich Hoffmann


Role / Title: AI practice lead / senior consultant (Kainos) – referenced as part of Kainos deploying AI systems.


Areas of Expertise: AI governance, responsible AI, AI trust, AI policy, AI-4D framework (sovereignty, green, responsible, valuable).


Source: [S5]


Omeed Hashim


Role / Title: AI strategist / consultant (collaborating with Theresa on AI sovereignty and responsible AI).


Areas of Expertise: AI sovereignty, green AI, responsible AI, AI policy, AI deployment trade-offs.


Source: [S4]


Audience


Role / Title: Various participants in the session (e.g., Ami Kotecha – Co-founder of Amro Partners, real-estate and data spin-out; other unnamed professionals).


Areas of Expertise / Interests: AI adoption in private sector, AI-driven productivity, data governance, sustainability, platform business models, responsible AI, regulatory frameworks.


Additional speakers:


– None identified beyond the three listed above.


Full session report – Comprehensive analysis and detailed insights

Theresa Yurkewich Hoffmann opened the session by noting the explosion of artificial-intelligence pilots worldwide and clarifying that a pilot is essentially a proof-of-concept – an idea being tested for later implementation [4-6]. She highlighted a striking statistic: only about 30 % of AI projects ever move beyond the pilot stage into production [11]. The presenters argued that the primary barrier to scaling is a pervasive lack of trust, which can arise at the organisational level, around personal data handling, in terms of societal impact, or concerning employment effects [13-14].


To illustrate why trust is eroding, Theresa referred to the OECD AI Observatory, which monitors AI-related harms globally. The Observatory recorded roughly 600 incidents in a single month (December 2025), each representing a case where people were harmed or a hazard was created by an AI system [21-24]. She gave concrete examples: in Romania, AI was used to clone voices and scam victims [27-30]; in Cairo, books printed from generative-AI prompts appeared at a book fair without any human editorial oversight, raising questions of creativity and consumer deception [31-35]; and facial-recognition systems deployed at borders have shown unequal performance across demographic groups, further undermining public confidence [36-38].


Theresa then identified six recurring reasons why proof-of-concepts stall. First, there is often a gap between adoption and impact – organisations build a model but do not consider how users will actually employ it [43-49]. Second, governance failures arise from inadequate risk-management, unclear accountability, and insufficient attention to bias or security [50-56]. Third, misalignment with broader societal goals, such as ignoring job-loss concerns when automating tasks, creates value-capture problems [57-60]. The remaining three challenges are sovereignty (control over data and models), sustainability pressures (energy and carbon costs), and change-management issues related to organisational culture and human-AI interaction [60-64].


To address these gaps, the speakers introduced the “AI 4D” framework, which evaluates every pilot through four lenses: Sovereignty (who controls the data, the model and the infrastructure); Green (Sustainability) (environmental impact and carbon-footprint considerations); Responsible AI (ethics, bias, governance, human-centred design); and Valuable AI (real-world benefit beyond mere cost-saving) [64-66]. The premise is that a proof-of-concept should be judged not only on functionality but also on how it performs across all four dimensions before it can be scaled [65-66].


The first illustrative scenario involved a public-health company using AI to triage radiology scans. During the pilot the model demanded far more compute than anticipated, exceeding the available power supply and creating a substantial water-usage burden for GPU cooling in a water-sensitive region [73-76]. The team concluded that the project failed for Green (Sustainability) reasons – the environmental and infrastructural costs made it financially and politically untenable [78-83].


The second scenario examined a city-wide traffic-light optimisation pilot. The audience initially suggested that the failure reflected Sovereignty (because the system relied on external data feeds) and Responsible AI concerns [94-95]. Theresa later clarified that the core issue was a Valuable AI problem – the promised benefit of faster travel conflicted with community safety needs – and also highlighted a lapse in Responsible AI because social impact had not been considered [96-98].


The third case described a justice-department AI intended to triage citizen complaints. Although it performed well in testing, it was hosted offshore, with no clear audit trail or control over model updates [103-108]. When the team prepared for production they discovered they could not guarantee the model’s integrity or compliance, highlighting a Sovereignty problem – loss of control over a critical public-service system [108-112].


The fourth example concerned a social-benefits eligibility engine. Although the pilot reduced manual checks and processing time, the model could not explain its decisions, exhibited bias across age, ethnicity and gender, and lacked an appeal mechanism [108-114][118-124]. This scenario combined Responsible AI failures (opacity, unfairness) with a shortfall in Valuable AI for vulnerable citizens, demonstrating how ethical shortcomings directly diminish societal benefit [118-124].


During the discussion of trade-offs, the audience asked, “Out of all these four, how do you rank them low to high?” Theresa answered that Responsible AI should be the top priority, followed by Valuable AI and Green (Sustainability), with Sovereignty placed lowest [310-315]. Several participants argued that Responsible AI often creates Valuable AI, suggesting that ethical design can be a driver of tangible benefit [280-285]. Omeed Hashim warned that prioritising rapid deployment with foreign large-language models can jeopardise Sovereignty, because a sudden loss of external access would cripple the service [292-298][137-144]. A further tension emerged between Green (Sustainability) and speed of adoption: organisations may accept higher carbon footprints to accelerate rollout, yet many rank sustainability low on their risk matrix [306-313][154-165].


Theresa highlighted divergent regulatory landscapes. In the UK, forthcoming AI regulations will impose transparency, explainability and third-party-supplier requirements for high-risk systems [219-224]; by contrast, Omeed Hashim noted that the US approach currently lacks comparable regulation [300-301]. An audience member from the private sector noted that India’s new data-protection and personalisation law, slated to take effect in late 2025, will eventually force Responsible AI practices, but adoption remains at a negligible 0.1 % level [230-235]. This underscored a shared view that government action is essential to set safe-use boundaries, even as participants debated how prescriptive such rules should be [213-224][230-235].


Omeed expanded on the Sovereignty dimension, describing it as “control” over who sees data, why, and what they do with it, and arguing that without this control public trust collapses [136-144]. He cited national examples – Serbia’s plan to build its own LLMs, France’s Mistral model, and the UK’s push for domestic AI – as evidence that state-led model development can mitigate dependence on foreign providers [145-147][245-247]. Regarding Green (Sustainability), he linked environmental impact to economic viability, noting that large data-centres (e.g., Microsoft’s Los Angeles-scale facility) consume massive electricity, and that a low-carbon system is inherently cheaper and more scalable [161-165][154-165]. For Responsible AI, he stressed human-centred design, citing Prime Minister Modi’s emphasis on ethics and the need for clear user-benefit articulation [172-176]. Finally, on Valuable AI, he recounted a UAE executive’s ambition to achieve ten-fold productivity gains with AI, warning that without a clear societal benefit the technology becomes a “dead weight” [186-194][191-196].


The audience raised practical business-model concerns. One entrepreneur described difficulty in building a platform-level AI for vending-machine agents because large clients (e.g., PepsiCo) demand exclusive IP, limiting broader adoption [254-264]. Omeed responded by proposing a hybrid service model where core IP is retained but reusable layers are offered as a shared service, echoing the need to move beyond pure IP lock-in [267-276]. This exchange highlighted a tension between commercial incentives and the Valuable AI lens, which seeks societal impact at scale.


Across the session, several points of agreement emerged: (i) Sovereignty is tightly linked to trust and project success [13-14][61-64][137-144]; (ii) Green (Sustainability) must be balanced against rapid adoption, as sustainability is often deprioritised yet essential for long-term scaling [306-313][154-165]; (iii) Responsible AI and Valuable AI were repeatedly voted the most critical dimensions by the audience [280-285][332-339]; and (iv) government policies-whether safe-use lists, transparency mandates, or national model development-are pivotal for establishing both Sovereignty and Responsible AI practice [213-224][236-247].


Conversely, disagreements centred on the relative priority of Sovereignty versus Responsible AI/Valuable AI (Theresa placed sovereignty lower in her personal ranking, while Omeed argued it is a non-negotiable trust factor) [317-319][292-298]; on whether Green (Sustainability) should be a primary driver or a secondary concern [306-313][154-165]; and on the extent of government regulation, with some participants calling for decisive, enforceable rules, while others pointed to the nascent state of existing legislation [213-224][230-235].


In concluding remarks, the presenters announced a white paper that distils eight to ten actionable recommendations per lens and made it publicly available via LinkedIn and a QR code [342-346]. They urged organisations to (a) draft an AI policy that explicitly prioritises the four lenses; (b) adopt a Responsible AI framework covering ethics, bias, governance and human-centred design; (c) define measurable KPIs for Green (Sustainability) (e.g., carbon or energy use), Sovereignty (e.g., proportion of locally hosted models), Responsible AI (e.g., bias-audit scores) and Valuable AI (e.g., user-outcome metrics) [354-361]; and (d) up-skill staff and embed diverse stakeholder perspectives to strengthen accountability [360-362]. The final message stressed that no single dimension can guarantee success; instead, a holistic, trade-off-aware approach is required to move AI pilots from experimental proofs of concept to trustworthy, scalable production systems [349-352][350-353].


Theresa thanked the participants, invited further questions after the session, and provided her contact details along with a QR code for feedback [342-346].


Session transcript – Complete transcript of the session
Theresa Yurkewich Hoffmann

Okay. Sounds good. Okay. Well, this session will be all around that. So if we can have the next slide. So what we want to talk to you today is that there are so many different AI projects and AI pilots happening in the world. And a pilot is the same as a proof of concept. It’s an idea that you’re testing, a concept that you’re testing, to see if that idea is something that you can put into implementation later on. And I was looking at the stat of how many AI pilots are in the world, and that was very difficult to quantify.

But what I did find was that only 30 % of all the AI projects actually go into production. So what we’re finding in the world is that we have lots of different AI ideas, but really a difficulty in translating that into something real. And the point of this session and what I think is the point of the whole AI summit was that one of those reasons is because we don’t have trust. So if we can have the next slide. So if we think about trust, that could be an organization’s trust that the AI will work. It can be trust in us as individuals around how our data will be shared, the outputs that it will give us.

It could be trust in terms of the impacts that it will have on people and people’s lives. It could be trust in terms of jobs and how that will work. And with that, what we’re seeing is a lot of these AI projects are failing to consider that. And I don’t know if you’re familiar with the OECD AI Observatory, but they do a monitor where they essentially monitor all of the harms and all of the AI incidents around the world. And you can see that it’s been growing exponentially. In December 2025 alone, there were 600 different incidents in the world. So those are 600 different times that people were harmed or that there was some kind of AI hazard that was created through a pilot.

If we can have the next slide. It’s just to zoom in, so this is a little bit difficult for you to read now. But in that harms monitor, you can click on any of them and learn more about them. So some that I found, the first one is in Romania. AI was being used to clone people’s voices and then scam victims by making them think that they were in distress. As well, there was an example, I believe it was in Cairo. So there was a book fair, and a lot of the books there were actually produced using an equivalent of ChatGPT, using generative AI. But there were no humans included in that project, so the books were printed with the prompts and the AI instructions still in them.

So that created a lot of issues around creativity: are these books generated by AI? Are they what we’re looking for? Is that what we thought we were buying? And then there’s several other examples happening all around the world where this is happening with facial recognition, for example. So using that at borders, and all of a sudden that might not work equally between different types of people. And all of these really build towards people losing trust in AI and being fearful of using it. So these are some examples, and we’ll kind of go into next what we can do about that. So next we’re going to look at why these proofs of concept fail and how we shift from just experimenting to actually having impact.

So I can have the next slide. So I put here six ideas, from what we’re seeing with the customers we work with, of why proofs of concept are not working. The first one is the gap between adoption and impact. So a lot of times we’ll have organizations that are working on AI and they’ve just thought about producing something, but they haven’t actually thought about how will people use it. Will it have the goal that you’re hoping it to have? Or say, for example, I’m using a legal tool. Will it actually serve the purpose that I’m looking for? Will it require more work for me to actually review everything it’s doing? So there’s a gap there. The second is around governance failures.

So I’m not sure how many of you have thought about risk management. How do you identify all of the risks that are coming up? Who’s going to be accountable to solving them? That might be things like, is it treating people differently? Is it biased? It might be things around security, for example. And then there’s also a failure around misalignment. So between what you’re doing and what society is looking for, those might not be aligned. So if you’re, for example, prioritizing AI use to automate people, all of a sudden people are thinking, what about job loss? So there’s not really a link in value there, and that’s another reason. We’ve got three other challenges. The first one is sovereignty, which I think if anyone was around the summit today or this week, everybody was talking about sovereignty.

So questions around how do we maintain control? Who is responsible? If, for example, a foreign government decides to turn off that AI access, is that something we trust? Or how do we deal with that? We also have sustainability pressure, so thinking about the carbon cost of using AI and the lack of clarity around that. And then change management is really all the people. So if we’re thinking about these frontier firms where people are working with agents, what does that work culture look like? Have we actually thought about how people use AI and have time to test it and practice with it? Have we thought about the relationship between people and AI and how that works as well? So these are six quick concepts.

And if we can have the next slide, it’s just a point to make: when we’re considering a proof of concept, we’re really just considering, does it function? We weren’t considering any of those other six things, and if we want to scale AI, we need to think about everything else. So next slide. So I guess the point of this session is really to think about how do we actually do that. So what we have thought of is calling it AI in 4D, so four-dimensional – the idea that you need to look at four different lenses to build trust in AI. If we could have the next slide. And when we’re looking at that, we’re thinking if you can look at all these four different lenses, that’s really going to help you predict any harms or challenges that could come with the AI model and actually prevent them, so that you can deploy and scale that AI.

There’s four dimensions that we’re looking at. The first one is sovereignty, so thinking about who controls it – not just data, but looking at all the security measures behind it, where does the model come from, who has access to it. We’re looking at green, so that’s sustainability: can this scale without destroying our climate goals, for example? We’re looking at responsibility, so that is thinking about ethics and governance and bias and fairness and human-centered.

And then valuable, so is this project actually really going to deliver a real-world benefit to people? So next slide. This one, I think it might be difficult for us to create a poll, so what we’ll do is we’ll do it by hand instead. So if we can just go to the next slide. What I thought we could do before we give you more information of those 4D and how to apply them and break out into groups is we could just have some quick scenarios and test what your knowledge is of those themes already. So I’m going to give you an example, and then we’ll do a show of hands of who thinks what lens is missed here.

So this example is with a public health company. They’re using AI to read different x-rays and radiology scans. And the point of the proof of concept is to help triage different illnesses or different breaks, things that you might find in the scan, and reduce that backlog. So when they actually started modeling and rolling it out, the team realized that this required more compute than they anticipated. It would exceed, actually, the available power supply, so there was not going to be ability to use it consistently. And that, actually, there was a large demand on water because the GPUs needed to be cooled, and this is in a water-sensitive area. So that would be another challenge between people and the planet.

So this program failed, this hypothetical program failed, because it was financially and politically impossible to run. So who thinks that this is a problem because of sovereignty? Who thinks that this is a problem with sustainability? Yeah? Who thinks that this is a problem with responsible AI and value? Yeah, I agree. So I mark this one as sustainability. I think it’s an example of the dynamics that we might have in the real world: we want to scale AI, do really great things, but actually we haven’t considered the power or the water usage that that has, because we either don’t have the information or it hasn’t been something that’s been baked in at the front to think about.

And we will give you some higher-level insight into what this means and how to apply it in a moment. Okay, the next one. So we’ve got a second one. This is dealing with transport. So I think we’ve all dealt with traffic this week. In this scenario here, the project is to optimize traffic lights across the city and smooth congestion. But when they started implementing this project, it was only looking at average commute time. It was diverting traffic into lower-income areas, and pedestrian safety actually became worse. So while this met the technical triggers – it did reduce and optimize time – there was a lot of community backlash. So does someone want to tell me which one they think this is a failure of?

Audience

Sovereign and responsibility.

Theresa Yurkewich Hoffmann

Yeah, we’ve got some sovereignty, we’ve got responsibility. I think this one is actually value. Here, what the ministry had thought was valuable, reducing overall time, is not what’s valuable to the people. What’s valuable to the people is safety while walking, and that you protect communities and don’t have a biased impact. Next one. So now we’re looking at justice. Here we’ve got a justice department building AI to triage different complaints from citizens and reroute them to the right legal body, whether it’s the courts or a commissioner or something like that. In the pilot, it performed really well, but later, when they started to prepare to deploy it into production, the team discovers that, one, the model is hosted offshore.

Two, they don’t have much information on when the model will be updated, and the government doesn’t have control over that: the logic within the model could change based on updates they couldn’t control, and they can’t audit the logs. So what do we think this time?

Audience

Yes.

Theresa Yurkewich Hoffmann

Okay, everyone is sovereignty. Sorry, did you say something else? Responsible AI? I think that could also be here, because they hadn’t thought of all these risks beforehand. But I agree: especially when you’ve got a national organization, they need to have control of the model and how it functions. Not being able to update it or audit it in such a sensitive area as justice is a real challenge, so sovereignty is the challenge here. And then the last one. Okay, so here we’ve got a social services agency, and they’re using AI to determine who’s eligible for social benefits. The pilot showed that they were able to process cases faster and have fewer manual checks, but when they were actually doing this in real life, the model wasn’t able to explain why it had made a decision, why it had allocated benefits to someone versus someone else. There was no way to understand how to appeal it, so if you were rejected, for example, you couldn’t understand why that was and how to change that decision. There was bias discovered between different groups, so age groups or ethnicity or gender.

It wasn’t treating everyone equally. And there was no agreed process for how you would escalate if there was a problem. So this became very seriously harmful, and there were a lot of vulnerable citizens who could be impacted. So in this scenario, what do we think, between responsible and value? Anybody else? Training data not accurate. Agreed. I think this is a good example of responsible but also valuable. Responsible AI is thinking about bias. It’s thinking about fairness. It’s thinking about the data that you have. It’s thinking about all these harms up front and how you’re going to deal with them. And then equally with value, people need to see the value of why they’re using AI in a public system.
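A check of the kind that would have caught the bias in this benefits scenario can be sketched as a simple demographic-parity audit. This is a minimal sketch: the decision records and the 0.1 flag threshold are invented for illustration.

```python
# Minimal demographic-parity audit over model decisions. The decision
# data and the 0.1 gap threshold are made up for illustration.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    return max(rates.values()) - min(rates.values())

decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)

rates = approval_rates(decisions)
print(rates)                    # {'A': 0.8, 'B': 0.55}
print(parity_gap(rates) > 0.1)  # True: flag for human review
```

A real deployment would use richer fairness metrics and domain-specific thresholds, but even a check this crude, run before production, would have surfaced the disparity the speakers describe.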

And if it’s actually harming people, then it’s not necessarily a good use case. So far, everyone is doing well. I think we can move on. But what we wanted to go through now is: how does this work in real life? What does this actually look like? So I’ll pass to Omeed. Can we have the next slide, please?

Omeed Hashim

Right. So I think it’s clear, having had this conversation and the contributions from yourselves, that it’s not so straightforward, because there are different dimensions, and this is the point that Riz is making in terms of having to look at different angles. So over the last two days, or definitely the day before yesterday, I was going around the summit hall, and because everywhere you see it says sovereign AI, sovereign AI, I was asking everyone: what do you mean by sovereign AI? And some people were talking about, oh, we need to have our data centers here. Somebody was saying our models need to be here. There were different kinds of conversations about what sovereign AI actually means in the context of AI and how it works and how it deploys and so on and so forth.

But the key thing is that ultimately it comes down to control. And my view is that it’s not even just about the organization, the sector potentially, or the nation, but also about the people. Where is your data? Who’s actually looking at your data? Why are they looking at your data? What will they do with your data? If you don’t have an understanding of that, the likelihood of you trusting that system is very low, and therefore it will be susceptible to failure. So it’s really, really key to understand the implications of data sovereignty, AI sovereignty, and so on. I mean, I was talking to one country, Serbia, and they were saying: we have a view that we need to have control of our own environment, we’re building new large language models in our own geography, and we are going to have control over what we do.

And I think that’s the key thing. But the important point is that if trust is lost in terms of sovereignty, the likelihood is that the system will fail. And I can assure you that if it’s not designed in at the beginning, you’re going to test this under a lot of pressure. You’re likely to be in a crisis as well, because when you don’t know whether your health system is trained on somebody else’s data, or you’re using very commercially available large language models, then you’re actually beholden to those people, and therefore you may not be able to achieve what you want to achieve as an objective. So it’s a really, really important dimension of a successful deployment.

And all of the stuff that I’m going to go through here, whilst I’ve seen it through failures, is also the recipe for success. So you can think of it both ways. If I could have the next slide, please. So, green AI. This is not dissimilar to what we had before with cloud and green computing: unless you actually look at the environment, and look at it from the economic viability of the system, ultimately it’s going to cost a lot more and it won’t scale. And if it doesn’t scale and you cannot handle the data volumes and the amount of usage you have, the likelihood is that it will stop.

Now, in my mind, the approach to take here is to make sure you address both. And addressing both the environmental effects and the cost actually works very, very nicely: we had a similar scenario before in how we deployed cloud services, and the same thing is translating to this now. The more economical your system is, the fewer greenhouse gases it is likely to produce as well. And as a result of that, you can sustain the system longer term. I mean, we all know people are building massive data centers now. Yesterday, there was, I think, a discussion around Microsoft building a new data center that consumes as much electricity as all of Los Angeles, and Los Angeles is an enormous city.

So the environmental effects of what we’re doing are really key, and they have a direct link to the costs that are driven out of that as well. And I can again assure you that if an AI system can’t scale sustainably, then it won’t scale at all. I’m pretty convinced of that. So we can move on. The next one is responsible AI, and I think a lot of people here are familiar with that. Governance, assurance, whether we are doing the right things ethically, whether there is bias in the system: all of those things fall under the responsible AI banner. And it’s really fundamental in giving people that trust that Theresa was talking about, in order to use the system in anger and really link their lifestyle to it, and so on and so forth.

And as you know, there are now all sorts of other systems, like the AI companions that help you achieve different things, whether it’s weight loss or even providing you counseling and helping you along in your life. But unless they’re done in an ethical and unbiased way, not leading you down a particular path, they’re likely to fail as well. Now, one thing that I wanted to bring to attention, and yesterday Prime Minister Modi was talking about this, which is really key as far as I’m concerned in the responsible AI area, is the human-centered design of AI. Because when you’re actually building an AI system, you need to have in your mind who you’re trying to help and how.

And what does this actually mean? If you’re trying to do something, you have to have a clear vision of what you’re trying to deliver to the people when they start to use the system. So I think the example around traffic management was a very good one, because we all struggled over the last few days with the traffic. If a system is put into place which does not take into account the purpose of what it’s doing, then it is likely to fail. I think the goal of the system itself is really key in terms of whether it gets the right sort of results or not.

There are many systems where people don’t consider that, and as a result it becomes unusable by the people, or it might have harms built into it. But the last dimension is how valuable that AI is, and what it means in terms of the outcomes and what the measures are and so on. So a couple of days ago I attended a session with a senior executive from the UAE. They were talking about what they, as a country, are trying to do. And it’s really key for us to understand what we’re trying to achieve. They had a very simple kind of thinking about what they were trying to do, which made it much more measurable.

So what was the intention for them? There are about 12 million people in the United Arab Emirates, and with the introduction of AI, they effectively wanted those 12 million people to do as much work as 120 million, almost ten times the size. And I think that is really, really key: a very simple reason as to why you’re doing what you’re doing, how you measure it, and what the value is. Now, if you think about that in the context of, say, India, in my opinion that ambition doesn’t give India the value. To create lots of agents to replace people’s jobs, or do more jobs, doesn’t actually have the right outcome, because there are already a lot of people here.

Why would you do that, right? So you have to think really carefully about what the value of the system itself is, because without thinking about that, you end up building a system that you cannot measure the value of. And then ultimately it would just become a dead weight: why do we have this at all? Should we be getting rid of it or not? So hopefully you now understand all of the different areas. At Kainos, we deploy AI systems into production, so we see a lot of these issues. And we are quite lucky, because our customers, which are all government departments, are actually very, very clued up in terms of the different aspects of what we’re doing, and they see value in it.

So it’s not just about deploying the technology, but about how this technology is going to affect UK citizens, and, where we work in other countries like Canada and the US, those countries respectively. So I think that was my last slide. I’ll hand it over to you.

Theresa Yurkewich Hoffmann

So we had originally intended to maybe do different breakout groups. The audience is quite small, so it’s up to you. We could either have everyone have a few discussions and talk about what you think is the most challenging, or we could use ten minutes for a Q&A, if people want to share their thoughts. Put your hands up if you want to go into a breakout group and discuss one of the concepts together. Okay, so we’ll do the second; nobody voted for that. So why don’t we have a discussion? It would be interesting to hear: looking at these four challenges, which do you think is the most difficult? Which do you feel like you’ve solved? We can have a little discussion around that. Please introduce yourself.

Audience

Hi there, thank you. My name is Ami Kotecha, I’m co-founder of Amro Partners. We are a real estate company, and we are now getting involved in a data spin-out. My challenge is as follows. As one of the co-founders of the company and as a leader, I’m very keen, of course, that there’s AI adoption and upskilling in the company, and of course that productivity challenges, where we have them, should be addressed using this technology. But I often feel left in the lurch, having to literally make all the decisions within the private sector environment, whereas I think government needs to step in and make some of these decisions on our behalf in terms of model utilization: where we go, what we do with it.

I mean, we are good experimenters, so fortunately we are throwing capital at experimenting. Not every company can afford to do that, or would want to, because of the same sort of issues you mentioned right at the start, which are aligned with the fear of adopting something that is going to break your system or open you up to some kind of cyber attack, and so on. So how do you see this playing out in the next 6, 8, 12 months, given the technology is obviously moving really fast, in terms of what role the government is going to play in saying this is safe to use, and this is still experimental and you should worry about it?

Theresa Yurkewich Hoffmann

In the EU, for example, AI uses are classified by risk. If it’s low risk, it’s like, go ahead and do it. But then there’s medium risk, and high risk would be something like really critical infrastructure, or something that’s impacting people directly. If it’s high risk, then there are loads of different things you need to do around transparency with people. There are also prohibited use cases of how to use AI. So I think that’s one example where some governments are actually saying: this is what we’ve deemed safe, and if it’s not one of these uses, then we want to see a lot of other checks. In the UK, we have regulation looking at third-party suppliers right now, and if they’re critical to the infrastructure of the country, there will be new requirements on AI as well, in terms of the updates that go in, transparency around models, explainability.

But then maybe you have the US approach, where you don’t have regulation yet. So it really depends on the country. A lot of what we heard yesterday was, for India, around thinking about ethical and responsible AI, but I don’t know if you have any regulation in place around that yet. I think it’s very difficult otherwise for a private company, because otherwise you’re in a race to the bottom: who’s the cheapest, who’s the quickest. And this week I was touring around with different businesses, and everyone was thinking, how do we do agents? But no one was thinking about human-centered, ethical, responsible. So I think it does need to come from the government to have a base.

But I noticed that some are maybe more forthcoming with that than others.

Audience

I just wanted to answer your question about the government. There is a data protection and personal data law that was legislated last year; from November 2025 onwards it is going to be in legal force. They are giving a window of around 18 to 24 months. After that, what you are describing, rules addressing how the data is handled by the person who creates the data, the data principal, and by the one who is the repository, are all coming. But presently, I would say only 0.1% of that responsible AI part is happening. Over these two years, though, the preparation is going to happen, and it will slowly get into that mode.

Omeed Hashim

I was just going to say, she’s a high-flyer entrepreneur in the UK, actually. But in my mind, there are a couple of things that we should really push the government to do. One is about smart data. They’ve been playing around with this for years and years, so we’ve got quite a lot of open banking applications now. But this can be extended way beyond open banking, so that different organizations can share data. Like, for instance, in the property market: how do you go through the cycle all the way from putting an offer in, to conveyancing, to, I don’t know, valuation, to the end?

So that’s really critical. The other side of it is actually having trust in language models which are built within the UK itself. And I think most are moving that way: even Serbia is doing it, and the French have already done it with Mistral. So there are a lot of examples of this, and that’s where the government can really help, and that’s what we should be lobbying them to do, in my opinion. Any other comments? Oh, yeah. Maybe behind you? Oh, sorry, you had your hand up first. You go first, and then behind you next.

Audience

Yeah. So I am building an agentic AI for vending machines. I have been an entrepreneur in the corporate world, but until three years ago I was just doing physical stuff: products, innovation, the food and beverage sector. One of the challenges I am seeing is how to build value at a platform level rather than at an individual customer level. For example, if I offer this vending machine agentic AI to PepsiCo, they would say, don’t do it for Coca-Cola, give it to us only and keep it with us. But UPI, for example, was not a Mastercard or Visa thing; it was for the whole country.

So how do you get that kind of traction to build a platform, instead of one very customized for a customer who might say, don’t give it to anybody else? That is the key question I am trying to address, and I do not seem to find answers.

Theresa Yurkewich Hoffmann

I agree, I think that is a challenge in the corporate world. I used to work at Microsoft, and even there it was: if you’re using our technology and we’re coming on a panel, then we’re on the panel, but we’re not having Amazon or Google on the panel with us. But I think, like you say, it’s really about figuring out what you have that’s so unique, and that actually goes to the value lens: if you have something that’s really valuable to people, you make the case that it has to be shared. But it is difficult if you’re building it with one customer first, because that almost becomes their IP that they want to keep. So something we are doing when we’re working on responsible AI projects is looking at the similarity of the requests that come in; we do the work ourselves in the background, then take the elements we need and expose them to the different customers. That way we keep that IP. But it is very difficult to get multiple customers on board if they’re all competing.

Audience

Yeah. So, for example, I built a few IPs in the area of sustainability, like clean air and clean water. I sold one to a company, but that company is not commercializing it. I don’t want to name the company, but it didn’t want to commercialize it; it wanted to keep the technology. So that’s a big challenge I am seeing in the corporate world: a company will buy another company, but it won’t implement it for society or for good. How do you handle that? Because that is part of responsible AI as well as valuable AI.

Omeed Hashim

Yeah, I think you’re right, and you have your own description of this problem. I was in the US a few months ago, and I don’t know whether you’re familiar with SVB, Silicon Valley Bank. They did a presentation to us about where all the funds are going. And if you look at what is going on, I think it’s about a trillion dollars’ worth of investment, and this investment is flowing into only a handful of companies. What those companies are doing is literally stifling everybody else. That’s a commercial reality. But if I were to offer you some options, I would say it shouldn’t be just the IP.

You should be thinking about it more as a service that you can build layers on. So you may retain the IP, or you may share it; it could be co-created, whatever it is, but it’s got to have a service model attached to it. Because if PepsiCo buys X and then co-creates, and Coca-Cola buys Y, why would they be buying it, and how would you be able to build on top of that? It’s a very, very commercially challenging problem. It’s been there for many years; this is nothing new.

Audience

As Shri said, exactly like that, UPI beat that. So today, compared to a Mastercard or a Visa, in India everyone is using UPI, and there are applications attached to UPI, whether it’s Paytm or Google or Amazon Pay; all of them are on the platform of UPI. So the question I had was: why are IT companies, for example Kainos, or an Infosys or an Accenture, not looking at the platform approach instead of the services approach, where they put their manpower in and run projects? I see this as a challenge. I have been talking to the top management of Infosys and Accenture, and every time I go with a proposal they say, just do it for a client and we will attach you as an expert. I don’t want to do that; I want to build a platform. There is nobody who is really interested in building that sort of path-breaking business, because it takes a longer time. Like UPI, it happened organically. Can these kinds of initiatives happen inorganically? That was the question.

Theresa Yurkewich Hoffmann

I think they are looking at both. So I think we should take one more question, because we have very few minutes left; we can talk after. I want to get to the person behind you for his question as well, and then we’ll do a quick wrap-up.

Audience

Good afternoon. Thanks for covering those areas in the lecture; that was much needed to understand. So you talked about sovereign AI, and then about valuable or responsible AI. There might be a few scenarios where, while chasing sovereignty, we might have to bypass value additions or responsibility for the citizens, and the other way around also. So can you discuss those scenarios where you value sovereignty more than responsible AI or value additions, and vice versa, and when they can be taken into account in parallel?

Theresa Yurkewich Hoffmann

So you’re asking about responsible AI and valuable AI, where they link and where one might be more useful than the other. Responsible AI, I think, can actually work as a lens for everything, but it’s much easier to think of them as separate. I think responsible AI can encompass five things: ethics and trust, bias and fairness, human-centered design, governance, and security. Where I think value distinguishes itself is that value looks beyond financial growth. A lot of organizations you might work with, or many organizations in the world, are looking at how much money this will save them, or how much time, or how much productivity. But I think valuable AI looks beyond that: does it actually create more well-being in people?

Does it give people time back with their families, for example, or for other hobbies they want to do? Valuable is thinking about the long-term benefit this will have in terms of how we change society. Maybe it’s going to create a whole bunch of different jobs in something else. So I think if you’re using responsible AI, it will create value; they go hand in hand, but that’s probably how I distinguish them. Is that your question? No? I’m not sure. Maybe Omeed has an answer.

Omeed Hashim

So I think you’re asking what happens when you have to make a trade-off between sovereignty and value. And I think this is a very good question, to be honest. Again, yesterday I was wandering around the summit; I keep asking people questions about different things. One of the countries I spoke to knows that using GPT models, or Claude, and various other things is a quick route to building what they need to build, because it’s there, it’s immediate, and it can be done almost without any issues at all. But they’re taking the hard route. They’re saying: actually, we don’t want to do that, because what if tomorrow we fall out with them, as Europeans are falling out with Americans anyway? What happens if they turn off the systems, what would we do then? So in terms of speed, the value is in going with what you’ve got. But the more challenging thing, which is the real value, is: can we actually use this system for our citizens on an ongoing basis? Is that data something that belongs to us? Are the models aligned with what we are doing?

So they want to be able to enable their people to deliver the right outcomes, and that would not happen if they just outsourced their sovereignty to the US. So those are some of the very, very important factors that need to be weighed. But ultimately, from a value perspective, Theresa is spot on: it’s about what the value is to the people who are going to use that system. To give you an example, we’ve stopped multiple times in the traffic because some VIP was coming out of somewhere, and they just literally closed the road.

So we’re sitting there for half an hour, and then we get going again. That’s happened, I think, three or four times so far. So if you were to build the system, you would need to think: what is the value for those taxi drivers, for all the public going around? That’s the key thing you need to be able to use AI to achieve. It needs to be measurable, and it needs to actually help the people. So yes, it’s a very tricky trade-off.

Theresa Yurkewich Hoffmann

I think the trade-off question is really good, especially in sustainability as well. A lot of times organizations might just think: how do we adopt AI as quickly as possible, and get people to use it as much as possible? But actually, every query you run has a sustainability impact. So there’s a trade-off there, because depending on where you are, you might value training people to use AI more, and be okay with that impact, because it’s about getting people comfortable with using it. But then you might be an organization that really values sustainability, with really strong carbon goals or net-zero goals.

Then actually that might be the trade-off that you have. So one thing we do when working with organizations is get them to make that very difficult decision: here’s high concern, here’s low concern. We map out all the harms we can think of, and all the principles and values that align to them, and they can’t leave any of them side by side; they have to rank all of them from high to low concern. Very quickly that makes you see what’s real for your organization. I’ve seen a lot of them put sustainability at the bottom, which to me is a little bit concerning, but it does get you to really understand your organization and how those trade-offs are going to play.

And that’s what we’re finding in the human one as well.
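The forced-ranking exercise Theresa describes can be sketched very simply: every harm or principle gets a concern score, and the organization must produce a strict ordering with no ties. The items and scores below are invented for illustration.

```python
# Hypothetical sketch of a forced-ranking exercise: harms are scored
# for concern, then strictly ordered. All items and scores are invented.

concerns = {
    "bias in benefit decisions":  9,
    "unexplainable outcomes":     8,
    "loss of model sovereignty":  7,
    "job displacement":           6,
    "energy and water footprint": 4,
}

# Force a strict ranking: highest concern first.
ranked = sorted(concerns, key=concerns.get, reverse=True)
print(ranked)

# What ended up at the bottom is just as informative as the top.
print(ranked[-1])   # 'energy and water footprint'
```

In this invented example sustainability lands last, mirroring the pattern Theresa says she often sees; the value of the exercise is that the organization has to see and defend that ordering explicitly.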

Audience

Just ten seconds more, adding to yours on ranking low to high. Out of all four factors, sustainability, sovereignty, responsibility, and value, how do you rate them, low to high, the four factors you have covered in your paper?

Theresa Yurkewich Hoffmann

How do I rate them? I think that’s very difficult. I’m putting responsible AI at the top, because, and this is a bit of a cheat, it can actually include sustainability, and I think it will create value. Then I would probably put sovereignty lower than that, but obviously this year has maybe changed that geopolitically. I think I still put responsible AI at the top. I’ll make that hard choice. What do you say?

Omeed Hashim

I think I kind of agree, and Prime Minister Modi said this himself yesterday: human-centered AI design is part of responsible AI. A few days ago, Theresa and I were talking to someone, and they were describing a system. If you’ll indulge me for a couple of minutes, let me explain the background of the system, and then you’ll see how it’s relevant. They were building a system for nursing or old people’s homes. You may know that the elderly get dehydrated; they forget to drink water, and that causes a lot of problems for them. So they built a system where, using AI and vision, they were checking whether the elderly were having enough liquid in the day or not. Now, that’s fantastic; everybody says this is a brilliant idea. But then you think about it: they are monitoring those elderly both in the common areas and where they may be in their bedrooms. That brings a challenge. And the other challenge was: what about the nurses who are actually hydrating them? It could become a negative effect on them, because somebody might be saying, you’re not doing your job right. And what about the family of the elderly, what about the impact on them? So I think it is really important to understand why we build the system, who it affects, how it affects them, and what the long-term benefits are, which brings the value. This is why it’s four dimensions: none of these are independent; they all relate to one another in one shape or form.

Theresa Yurkewich Hoffmann

Yeah, so we’ll work towards wrapping up, because I think we’re getting the time check. This timer switched: it said 8, then it said 17, and now it says 10, and she told me I had 7. Okay, this one’s right. Well, let’s see if there are more questions. But we have the takeaways and things to go through also, so I think we’ll wrap up, and we can talk to people individually afterwards. Can we skip through some of the slides? Next one, next one, next one. Next one, I think. Okay, so we actually wanted to flip that question and ask it to you in the audience as well. Which one would be your top?

So of those four lenses, sovereign, green, responsible, and valuable AI, which one do you think is an absolute must-have? And you can only pick one. If this isn’t there, it’s going to derail the project. Shall we do a show of hands? Who says sovereignty is the most important? Who says it’s green AI, sustainability? Who says it’s responsible, and then value? Some people didn’t vote. You didn’t vote back there. But it sounds like a lot of people are in the camp that responsible and value are the most important. I think I agree. But what we wanted to get across is that all of these need to come into play as well. Can we do the next slide?

Of that question, though, who has a responsible AI practice in place? Who uses a framework or anything like that? Anybody? Who has a sovereign AI policy in place? No? And who is looking at sustainability? None of us. So that’s a takeaway for all of us. I think we wanted to wrap up with how do I take this forward. So a couple points I want to make. The first is that we have taken a lot of the learnings and things you talked about in this and we’ve turned it into a white paper. There’s a link below, but we can share it with you. We’ve shared it on LinkedIn as well to talk about these learnings. And we wrapped it up to say for each of those themes, here’s eight to ten things that you could do if you really wanted to take sovereignty, green, sustainable, responsible, and valuable AI forward.

So please check it out. I'm very happy to talk about that paper and give you more insight. The key takeaway for us here is that no single dimension is the answer. I think that's come out in the scenarios, in the conversations that we've had, and in how we're prioritizing: you can't really have just one. You need all of them if you want to scale that project and really make it to production. The second point is on tradeoffs. It was really good that that came up in the conversation: being aware of the tradeoffs that you will have to make, and having a process in place to record why you made that decision, is important.

I think a takeaway for everyone here is: think about an AI policy, which talks about how you'll use AI and what you will prioritize. Think about having a responsible AI framework, which is essentially all the questions and things that you want implemented across ethics, trust, and security. And then really think about how you can turn some of this into numbers. What are the KPIs that you can actually look at for sustainability, for users, for ethics? Don't just make them a concept of "we will be ethical." Think about what that actually means for you and how you're going to measure it. That's important if you want to get funding and investment, and to show the project is a success.
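The speakers' suggestion to "turn some of this into numbers" can be sketched in code. The metric choices below (demographic parity difference as an ethics KPI, energy per request as a sustainability KPI) are illustrative assumptions on my part, not metrics named in the session:

```python
# Hypothetical sketch: turning responsible-AI concepts into measurable KPIs.
# The specific metrics here are illustrative assumptions, not from the session.

def demographic_parity_difference(outcomes):
    """Fairness KPI: largest gap in positive-outcome rate across groups.

    `outcomes` maps a group name to a list of 0/1 model decisions.
    A value of 0.0 means every group receives positive outcomes at the same rate.
    """
    rates = [sum(decisions) / len(decisions) for decisions in outcomes.values()]
    return max(rates) - min(rates)

def energy_per_request_wh(total_energy_wh, request_count):
    """Sustainability KPI: average energy cost of serving one request."""
    return total_energy_wh / request_count

if __name__ == "__main__":
    decisions = {
        "group_a": [1, 1, 0, 1],  # 75% positive rate
        "group_b": [1, 0, 0, 1],  # 50% positive rate
    }
    print(f"parity gap: {demographic_parity_difference(decisions):.2f}")  # 0.25
    print(f"Wh/request: {energy_per_request_wh(1200.0, 4000):.2f}")       # 0.30
```

The point is not these particular formulas but the pattern: each dimension of the framework gets a concrete number that can be tracked over time and reported to funders.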

And then finally, think about how you can upskill your teams to understand these concepts and how you can incorporate diverse views. I think that's probably the most important part of building out the responsibility. So we will wrap up. If you want to get in touch with us, here are our details: find us on LinkedIn or send us an email. We can have a couple minutes after this, since I know there were two questions in the audience that we might not have got to. Otherwise, we hope this session was useful. If you want to give us feedback, here's a bigger QR code. If you want to stay in touch, fill this out and let us know if there's anything we can improve on the session, or any questions that you have; we're super happy to hear that. Otherwise, just a big thank you for your participation, and we hope you have a good rest of the day and a good weekend.

Omeed Hashim

Yeah, I was just going to say: great questions there about trade-offs, and absolutely the right question to ask, because none of these are unique. Sorry, please, go ahead.

Audience

Yeah, like you were talking about trade-offs, I just wanted to say: okay, every model has its own aspects, pros and cons, or, as you say, different dimensions. I've got most of my answers from those questions, but I just wanted to ask: if we're building something and taking these aspects, like responsible AI, valuable AI, on them. But if we are taking, we'll be missing some aspects. As he said about responsibility, if we are taking accuracy and fairness, if we will take… if it makes it easier to speak in Hindi, I understand. It's okay, but fairness, okay. If we are doing… Sorry, sorry. Again.

Theresa Yurkewich Hoffmann

I think there's no issue. You can ask us by email as well; it's not a problem. We're more than happy to respond to you if you want to ask.

Omeed Hashim

Just a question.

Related Resources: Knowledge base sources related to the discussion topics (29)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high confidence)

“A pilot is essentially a proof‑of‑concept – an idea being tested for later implementation.”

The discussion transcript explicitly equates pilots with proof-of-concepts, matching the description in the knowledge base [S17] and reiterated in the session summary [S4].

Correction (high confidence)

“Only about 30 % of AI projects ever move beyond the pilot stage into production.”

The knowledge base reports that almost 80 % of pilots fail to reach production, implying roughly 20 % succeed, not 30 % as stated [S7]; other sources note only a “small fraction” reach production but do not give a 30 % figure [S4].

Confirmed (high confidence)

“The primary barrier to scaling is a pervasive lack of trust.”

Lack of trust is identified as a key barrier to moving AI pilots to production in the knowledge base summary of the session [S4].

Confirmed (medium confidence)

“In Romania, AI was used to clone voices and scam victims.”

Both a specific mention of a Romanian voice-cloning scam and a broader discussion of synthetic-voice fraud appear in the knowledge base [S105] and [S104].

Confirmed (medium confidence)

“Governance failures (inadequate risk‑management, unclear accountability, insufficient attention to bias or security) cause pilots to stall.”

The knowledge base highlights governance gaps, risk-management shortcomings, and data-quality issues as major reasons pilots do not progress [S4] and [S7].

Confirmed (medium confidence)

“Sustainability pressures (energy and carbon costs) are a recurring challenge for AI pilots.”

Sustainability impacts of AI queries and the trade-off between rapid adoption and carbon footprint are discussed in the knowledge base [S6].

Additional Context (low confidence)

“The ‘AI 4D’ framework evaluates pilots across Sovereignty, Green (Sustainability), Responsible AI, and Valuable AI dimensions.”

While the exact “AI 4D” label is not in the knowledge base, the same four pillars (sovereign data/control, sustainability, responsible/ethical AI, and real-world value) are referenced in discussions of sovereign and responsible AI beyond proof-of-concepts and sustainability considerations [S6] and [S4].

External Sources (107)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
Building Sovereign and Responsible AI Beyond Proof of Concepts — – Theresa Yurkewich Hoffmann- Omeed Hashim
S5
Building Sovereign and Responsible AI Beyond Proof of Concepts — – Theresa Yurkewich Hoffmann- Omeed Hashim – Theresa Yurkewich Hoffmann- Omeed Hashim- Audience
S6
Building Sovereign and Responsible AI Beyond Proof of Concepts — Speakers:Audience, Theresa Yurkewich Hoffmann Speakers:Audience, Theresa Yurkewich Hoffmann, Omeed Hashim Speakers:The…
S7
AI as critical infrastructure for continuity in public services — first definitely not technology because I think we’ve seen technology is always almost ahead very true over the last cou…
S8
Building a Digital Society, from Vision to Implementation — – Chukwuemeka Cameron Economic | Sociocultural Hines cites research from Gary Marcus presented at Web Summit showing t…
S9
Multistakeholder Partnerships for Thriving AI Ecosystems — Robert Opp opened the discussion by highlighting UNDP’s concern that without responsible deployment, AI could exacerbate…
S10
Shadow AI and poor governance fuel growing cyber risks, IBM warns — Many organisations racing to adopt AI arefailing to implement adequate security and governance controls, according to IB…
S11
AI Meets Cybersecurity Trust Governance & Global Security — Anne warns that the rush to deploy consumer AI tools without sufficient safeguards creates systemic security gaps, and t…
S12
Toward Collective Action_ Roundtable on Safe & Trusted AI — And AI is allowing this to happen at a scale that at the moment we already see disruptions, but I think there’s real ris…
S13
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S14
https://dig.watch/event/india-ai-impact-summit-2026/building-sovereign-and-responsible-ai-beyond-proof-of-concepts — And if it’s actually harming people, then it’s not necessarily a good use case. So far, everyone is doing good. I think …
S15
HealthAI: The Global Agency for Responsible AI in Health — Responsible AI is characterised by AI technologies that align with established standards and ethical principles, priorit…
S16
Multistakeholder Partnerships for Thriving AI Ecosystems — An audience member raised concerns about whether AI democratisation would genuinely benefit small enterprises or primari…
S17
https://app.faicon.ai/ai-impact-summit-2026/building-sovereign-and-responsible-ai-beyond-proof-of-concepts — And that’s what we’re finding in the human one as well. And then valuable, so is this project actually really going to …
S18
How AI Is Transforming Diplomacy and Conflict Management — “Increasingly, we’re in a pilotitis zone where almost everyone’s got pilots”[157]. “One of the biggest gaps is leaders n…
S19
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Roy Jakobs emphasizes that successful AI implementation requires dedicating as much effort to user adoption and change m…
S20
Driving Enterprise Impact Through Scalable AI Adoption — Thank you Terrific topic to be discussed at Davos I’m Pranjal Sharma I’m from India I’m an author and analyst we’re look…
S21
WS #123 Responsible AI in Security Governance Risks and Innovation — This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. …
S22
The future of work: preparing for automation and the gig economy — The report suggests several measures to ‘help people adjust to the new technologies’: education and (re)training, suppor…
S23
ETHIO PA 2025 — 4 Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation . Princeton, NJ: Princeton…
S24
Contents — As discussed in Chapter 1, young people in both developing and developed countries continue to face disproportionate dis…
S25
Challenges to UK becoming an AI superpower — UK Prime Minister Rishi Sunak envisions Britain becoming an AI superpower, leveraging the potential of AI to drive econo…
S26
review article — In a world of sovereign nation states, health continues to be primarily a national responsibility; however, the intensif…
S27
International Cooperation for AI &amp; Digital Governance | IGF 2023 Networking Session #109 — Furthermore, the dependence of Equatorial Guinea on foreign technology, particularly Huawei and China for its ICT infras…
S28
AI as critical infrastructure for continuity in public services — So we’ve seen, especially in India, we’ve seen many, many, many pilots. And almost 80% of those pilots don’t make it to …
S29
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S30
Keynote-Martin Schroeter — Thank you. Thank you. Thank you very much. Good afternoon, everybody. First, I want to thank the Honorable Prime Ministe…
S31
Who Watches the Watchers Building Trust in AI Governance — I’m not sure, but I think what we can do is sort of look at the trend, and the trend is towards, I think, a stronger eco…
S32
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S33
Responsible AI in India Leadership Ethics &amp; Global Impact — These key comments fundamentally shaped the discussion by establishing a progression from theoretical principles to prac…
S34
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S35
Diplomatic policy analysis — Global collaboration:Policy analysis helps identify shared interests and opportunities for cooperation, fostering consen…
S36
Opportunities, risks and policy implications — Most actions geared towards making the metaverse inclusive and accessible are voluntary. The Web Accessibility Initiati…
S37
Data Policy in the Fourth Industrial Revolution: Insights on personal data — – -There is a need for a common and consistent risk-based framework to help policy-makers identify and understand object…
S38
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Lidia Stepinska Ustasiak: Excellencies, distinguished delegates, ladies and gentlemen, good afternoon. My name is Lidia …
S39
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S40
Building Sovereign and Responsible AI Beyond Proof of Concepts — And I think that’s the key thing. But the important thing is that if the trust is lost in terms of the sovereignty, the …
S41
AI and international peace and security: Key issues and relevance for Geneva — Capacity Building and Information Exchange:Supporting education and regional dialogue to bridge technological divides an…
S42
KSA Cloud First Policy — – a) Software as a Service (SaaS) is the preferred option as it maximizes the benefits brought by Cloud. – b) Platform a…
S43
Cloud Policy of the Icelandic Public Sector — The purpose of the Icelandic public sector cloud policy is to define objectives across the Icelandic public sector in th…
S44
Policies and platforms in support of learning: towards more coherence, coordination and convergence — – (a) Internal learning is part of the staff’s longer-term engagement with their organization on a learning and developm…
S45
Research Publication No. 2014-6 March 17, 2014 — Depending on the model in question, it is these characteristics that give rise to the concerns of governments related to…
S46
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Tomiwa Ilori:Thank you very much, Michael. And quickly to my presentation, I’ll be focusing more on the regional initiat…
S47
Keynote Adresses at India AI Impact Summit 2026 — Summary:The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India…
S48
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — High level of consensus with remarkable alignment across government, private sector, and civil society representatives. …
S49
State of play of major global AI Governance processes — In accordance with this, extensive research and algorithmic advancements have been integrated into public policy-making …
S50
The International Observatory on Information and Democracy | IGF 2023 Town Hall #128 — In concluding the analysis, the speakers provide valuable insights into the complexities surrounding technology policy a…
S51
From Technical Safety to Societal Impact Rethinking AI Governanc — Explanation:Both speakers support government involvement but disagree on scope – Ioannidis wants to keep core technology…
S52
Responsible AI for Children Safe Playful and Empowering Learning — The discussion maintained a consistently thoughtful and cautious tone throughout, with speakers demonstrating both excit…
S53
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Another perspective suggests that countries from the Global South are not prioritising sustainability and climate protec…
S54
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S55
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — AI’s energy demands. Threaten to outpace green energy progress. Model providers face a stark reality. AI’s energy needs …
S56
WS #279 AI: Guardian for Critical Infrastructure in Developing World — The discussion explored strategies for reducing dependence on foreign technology, including developing robust domestic l…
S57
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S58
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S59
Skilling and Education in AI — This discussion focused on leveraging artificial intelligence as a tool for development and equality in India, examining…
S60
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Gap between policy aspirations and regulatory implementation capabilities Gap between policy development and regulatory…
S61
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — There’s often a lag between adoption of technologies and regulations governing them
S62
Promoting policies that make digital trade work for all (OECD) — There is a regulatory gap between what is negotiated on multinational level and what is on the ground in these countries…
S63
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — Gurry explains that there is an increasing gap between when new technologies appear and are adopted versus when governme…
S64
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Disagreement level:Low to moderate disagreement level with high strategic alignment. The speakers demonstrate strong con…
S65
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — ### Sustainability Imperative A nuanced discussion emerged around the appropriate balance between enabling innovation a…
S66
AI as critical infrastructure for continuity in public services — So we’ve seen, especially in India, we’ve seen many, many, many pilots. And almost 80% of those pilots don’t make it to …
S67
Building Sovereign and Responsible AI Beyond Proof of Concepts — Artificial intelligence | Building confidence and security in the use of ICTs Theresa points out that only a small frac…
S68
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S69
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S70
Building Sovereign and Responsible AI Beyond Proof of Concepts — Okay. Sounds good. Okay. Well, this session will be all around that. So if we can have the next slide. So what we want t…
S71
Keynote-Martin Schroeter — Thank you. Thank you. Thank you very much. Good afternoon, everybody. First, I want to thank the Honorable Prime Ministe…
S72
Keynote-Martin Schroeter — “while more than two -thirds of global organizations are already heavily invested in AI, almost half still struggle to s…
S73
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — And it’s very useful. It’s used to benchmark applications and performance on quantum computers and using AI techniques a…
S74
Scaling AI for Billions_ Building Digital Public Infrastructure — Is it safe? Is it trustworthy? And what is the risk if this particular model, a service provider who’s providing that mo…
S75
A Resource Guide to Public Diplomacy Evaluation By Robert Banks — The impact model utilized by the BBG has five dimensions: reach, engagement, influence, understanding, and reliability. …
S76
https://app.faicon.ai/ai-impact-summit-2026/how-nonprofits-are-using-ai-based-innovations-to-scale-their-impact — These are not homogeneous systems, right? But unfortunately, the educational structure around this is a chalk and talk m…
S78
Closure of the session — Practical Considerations and Next Steps
S79
Wrap up — ### Practical Outcomes and Next Steps
S80
Building Indias Digital and Industrial Future with AI — Good morning, everyone. Warm welcome, distinguished guests, colleagues and partners and speakers who have joined us toda…
S81
Day 0 Event #10 First Aid Online: Making the Difference for Children — The tone was primarily informative and concerned, with speakers presenting statistics and examples to illustrate the ser…
S82
Standard ECMA-387 High rate 60 GHz PHY, MAC and HDMI PALs — ## High Rate 60 GHz PHY, MAC and PALs COPYRIGHT PROTECTED DOCUMENT | Contents …
S83
Any other business /Adoption of the report/ Closure of the session — In contrast, the draft report poised for adoption is received with positive sentiment. It is commended for its structure…
S84
Lightning Talk #109 Ensuring the Personal Integrity of Minors Online — The discussion maintained a professional, informative tone throughout, with speakers presenting serious statistics and c…
S85
Debrief of the exercise and open discussion — Panelists:Thank you very much, Mayor. and it’s great in absence, or in absentia is any more diplomatically correct. This…
S86
Stakeholder group representation — Thank you very much! Very interactive session and great take aways. Honest conversation about what we already know but a…
S87
Global Standards for a Sustainable Digital Future — The discussion maintained a collaborative and constructive tone throughout, with speakers demonstrating expertise while …
S88
Dynamic Coalition Collaborative Session — – **Audience** – Various audience members asking questions
S89
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — Interactive polls revealed participant priorities and concerns. When asked about top challenges, responses were evenly s…
S90
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S91
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S92
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S93
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S94
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S96
Contents — This archetype is new and exists mostly in the pilot and proof-of-concept phases.
S97
HIGH LEVEL LEADERS SESSION I — Issues of trust arise in determining who selected the data, the level of trust in the data and identifying the benefits …
S98
Building Climate-Resilient Systems with AI — Data Infrastructure and Scaling Challenges: Multiple speakers highlighted critical barriers including lack of standardiz…
S99
Building Climate-Resilient Systems with AI — -Data Infrastructure and Scaling Challenges: Multiple speakers highlighted critical barriers including lack of standardi…
S100
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — Brandon Soloski opened the discussion by referencing the Edelman Trust Barometer, which showed a significant trust defic…
S101
Secure Talk Using AI to Protect Global Communications & Privacy — Thank you. Thank you for your kind words and welcome Vikram. Before we really get down to asking a few questions from th…
S102
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Because for my money, we are fiddling while Rome burns. The world is falling apart, particularly addressed probably to P…
S103
Keynote by Mathias Cormann OECD Secretary-General India AI Impact — India AI Impact Summit. And thank you to India for your leadership in bringing together the global AI community followin…
S104
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — The ability to mimic voices and generate realistic messages allows malicious actors to deceive individuals in various wa…
S105
WS #255 AI and disinformation: Safeguarding Elections — Roxana Radu: Yes, absolutely. First of all, apologies for not being able to join you physically this year at the IGF….
S106
How nonprofits are using AI-based innovations to scale their impact — This is a profound critique of the nonprofit sector’s approach to technology adoption. It highlights the paradox where o…
S107
Building Population-Scale Digital Public Infrastructure for AI — The discussion highlighted that AI deployment differs fundamentally from traditional software procurement. Rather than a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Theresa Yurkewich Hoffmann
18 arguments · 170 words per minute · 4540 words · 1599 seconds
Argument 1
Only 30 % of AI projects reach production; lack of trust is a primary barrier (Theresa)
EXPLANATION
Theresa points out that a very small proportion of AI pilots become operational systems, and she attributes this low conversion largely to a deficit of trust among stakeholders. Without confidence that AI will work reliably and safely, organisations hesitate to move projects beyond the proof‑of‑concept stage.
EVIDENCE
She cites the statistic that only 30 % of AI projects go into production [11] and links this to a broader trust problem, noting that trust in the technology, data handling and outcomes is missing [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Low adoption despite available technology is highlighted in [S7], and research showing companies spending on AI without ROI is discussed in [S8]; the prevalence of “pilotitis” is noted in [S18].
MAJOR DISCUSSION POINT
Trust deficit in AI pilots
AGREED WITH
Omeed Hashim
Argument 2
AI‑related incidents are rising (≈600 in 2025), eroding public confidence (Theresa)
EXPLANATION
Theresa highlights a rapid increase in reported AI harms worldwide, suggesting that the growing number of incidents undermines public trust in AI systems. The surge in incidents signals that many pilots are generating unintended negative outcomes.
EVIDENCE
She refers to the OECD AI Observatory’s harms monitor, which recorded 600 AI-related incidents in December 2025 alone [21-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust erosion caused by AI hallucinations and misinformation is documented in [S13] and [S12], while broader trust decline is described in [S11].
MAJOR DISCUSSION POINT
Increasing AI incidents
Argument 3
Real‑world harms (voice‑cloning scams, AI‑generated books, biased facial‑recognition) illustrate trust problems (Theresa)
EXPLANATION
Theresa provides concrete examples of AI misuse that have caused real damage, demonstrating why users and the public are wary of AI deployments. These cases show that pilots can generate harms that erode confidence.
EVIDENCE
She describes a Romanian case where AI-cloned voices were used for scams [28-29]; a Cairo book-fair incident where generative-AI-written books were printed with prompts visible, raising questions about creativity and authenticity [30-36]; and facial-recognition systems at borders that performed unevenly across population groups [36-38].
MAJOR DISCUSSION POINT
Illustrative AI harms
Argument 4
Four lenses are needed to anticipate harms and enable scaling (Theresa)
EXPLANATION
Theresa introduces the 4D framework, arguing that evaluating AI projects through four distinct dimensions helps predict and prevent harms, making it possible to move from pilot to production at scale. The four lenses together create a holistic trust‑building approach.
EVIDENCE
She outlines the four dimensions: sovereignty, green (sustainability), responsibility, and valuable AI. Looking at all of them, she states, helps predict harms and enables deployment [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-dimensional (sovereignty, green, responsible, valuable) framework is presented in detail in [S4] and reiterated in [S6].
MAJOR DISCUSSION POINT
4D trust framework
Argument 5
Responsible AI covers ethics, bias, governance and human‑centred design; essential for user trust (Theresa)
EXPLANATION
Theresa explains that the responsibility dimension encompasses ethical considerations, fairness, bias mitigation, governance structures and designing AI around human needs, all of which are crucial for gaining user confidence.
EVIDENCE
Within her description of the four lenses she lists “responsibility” as covering ethics, bias, governance and human-centred design [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI’s components-ethics, bias mitigation, governance, human-centred design-are defined in [S6] and [S15]; governance challenges are further explored in [S21].
MAJOR DISCUSSION POINT
Responsible AI definition
AGREED WITH
Audience
Argument 6
Value lens focuses on real‑world benefit beyond cost‑saving, e.g., wellbeing, new jobs (Theresa)
EXPLANATION
Theresa states that the value dimension looks past immediate financial gains to assess broader societal benefits such as improved wellbeing, time saved for families, or the creation of new employment opportunities.
EVIDENCE
She describes the value lens as measuring outcomes like wellbeing, time for families and new jobs, beyond simple cost-saving metrics [65-66].
MAJOR DISCUSSION POINT
Value dimension
AGREED WITH
Audience
Argument 7
Adoption‑impact gap: pilots often ignore how users will actually work with the system (Theresa)
EXPLANATION
Theresa notes that many AI pilots focus on building a technical solution without considering real‑world usage patterns, leading to a mismatch between what is delivered and what users need.
EVIDENCE
She outlines this gap by describing organisations that produce AI outputs without thinking about user interaction, goals, or additional review work required [44-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for change-management and user-centric adoption is emphasized in [S19]; the “pilotitis” phenomenon is described in [S18]; macro adoption challenges are discussed in [S20].
MAJOR DISCUSSION POINT
Adoption vs impact
Argument 8
Governance failures: missing risk identification, accountability, bias and security checks (Theresa)
EXPLANATION
Theresa argues that many pilots lack proper risk management structures, leaving questions about who is accountable for bias, security, or other risks unanswered.
EVIDENCE
She lists governance failures such as not identifying risks, unclear accountability, and overlooking bias or security concerns [50-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Lack of AI governance and security controls is warned about in [S10]; broader trust-governance issues are covered in [S11]; specific governance failures are analysed in [S21].
MAJOR DISCUSSION POINT
Governance shortcomings
Argument 9
Misalignment with societal goals: automation pilots overlook job‑loss concerns (Theresa)
EXPLANATION
Theresa points out that AI pilots aimed at automation can clash with societal expectations, especially when they ignore potential job displacement, creating a value mismatch.
EVIDENCE
She gives the example of AI projects that prioritize automation without considering public concerns about job loss, highlighting a misalignment between organisational goals and societal expectations [57-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of automation on jobs and the need for social safeguards are highlighted in [S22]; inequality and automation risks are discussed in [S23].
MAJOR DISCUSSION POINT
Societal misalignment
Argument 10
Sovereignty challenges: reliance on foreign providers reduces control (Theresa)
EXPLANATION
Theresa emphasizes that depending on external AI services can jeopardise an organisation’s ability to control the model, data and continuity, which undermines trust and project viability.
EVIDENCE
She mentions sovereignty issues such as loss of control if a foreign government shuts off AI access, and the broader concern of external dependence [61-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of dependence on foreign technology and loss of sovereignty are examined in [S26] and [S27]; UK’s computing constraints that affect sovereignty are noted in [S24]; health-sector sovereignty concerns are presented in [S23].
MAJOR DISCUSSION POINT
Sovereignty risk
AGREED WITH
Omeed Hashim
DISAGREED WITH
Omeed Hashim, Audience
Argument 11
Sustainability pressures: high energy and water use can make projects infeasible (Theresa)
EXPLANATION
Theresa notes that AI pilots with heavy compute requirements can consume large amounts of electricity and water, making them unsustainable and financially untenable.
EVIDENCE
She cites the health-scan pilot that required more compute than power supply and generated high water-cooling demand, leading to project failure [73-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
High energy consumption of AI workloads and data-centre power use are detailed in [S10]; computing-resource constraints that affect sustainability are discussed in [S24] and [S25].
MAJOR DISCUSSION POINT
Sustainability constraints
Argument 12
Change‑management issues: cultural readiness and AI‑human interaction are overlooked (Theresa)
EXPLANATION
Theresa argues that many pilots fail to prepare people for working with AI, neglecting training, cultural acceptance and clear processes for escalation, which hampers adoption.
EVIDENCE
She references the broader challenge of change-management, noting that organisations often do not consider how people will use AI, practice with it, or manage escalation when problems arise [61-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of cultural readiness and change-management for AI adoption is stressed in [S19]; trust and governance aspects are covered in [S11] and [S21].
MAJOR DISCUSSION POINT
Change management gap
Argument 13
Governments can define safe‑use lists, transparency and explainability rules; approaches differ (UK vs US) (Theresa)
EXPLANATION
Theresa explains that governments are beginning to regulate AI by specifying permissible use‑cases, requiring transparency, and setting explainability standards, but regulatory approaches vary across jurisdictions.
EVIDENCE
She describes the UK’s emerging regulation on third-party AI suppliers, transparency, and explainability requirements, contrasted with the US’s lack of formal regulation [213-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy approaches to responsible AI deployment are discussed in the UNDP-focused report [S9]; broader governance and transparency concerns are highlighted in [S11].
MAJOR DISCUSSION POINT
Regulatory landscape
AGREED WITH
Omeed Hashim
DISAGREED WITH
Audience
Argument 14
Sustainability vs. adoption: higher usage raises carbon cost; organisations must rank concerns (Theresa)
EXPLANATION
Theresa highlights a trade‑off where rapid AI adoption can increase carbon emissions, requiring organisations to prioritize sustainability alongside other concerns.
EVIDENCE
She discusses mapping high-concern versus low-concern harms, noting that many organisations place sustainability low on their priority list despite its impact [306-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Carbon and energy costs of AI scaling are examined in [S10]; computing-capacity limits that influence sustainability decisions are noted in [S24].
MAJOR DISCUSSION POINT
Sustainability trade‑off
AGREED WITH
Omeed Hashim
DISAGREED WITH
Omeed Hashim
Argument 15
Prioritising responsible AI tends to generate value; dimensions are inter‑dependent (Theresa)
EXPLANATION
Theresa argues that focusing on responsible AI (ethics, bias, governance) naturally creates valuable outcomes, and that the four dimensions reinforce each other rather than operate in isolation.
EVIDENCE
She states that responsible AI can incorporate the other lenses and that responsible practices lead to value creation, emphasizing their inter-dependence [280-285].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inter-dependence of the four lenses is described in [S6]; responsible AI’s role in creating value is reinforced in [S15]; governance interplays are analysed in [S21].
MAJOR DISCUSSION POINT
Inter‑dependency of dimensions
Argument 16
Release a white paper with 8‑10 actionable items per dimension (Theresa)
EXPLANATION
Theresa announces that the session’s insights have been compiled into a white paper offering concrete steps for each of the four dimensions, providing practitioners with a practical guide.
EVIDENCE
She mentions the white paper containing eight to ten recommendations for sovereignty, green, responsible and valuable AI [342-346].
MAJOR DISCUSSION POINT
Guidance document
Argument 17
Create an AI policy, a responsible‑AI framework, and measurable KPIs for each lens (Theresa)
EXPLANATION
Theresa recommends that organisations develop formal AI policies, adopt responsible‑AI frameworks, and define key performance indicators to monitor sustainability, ethics, and other dimensions.
EVIDENCE
She outlines steps to draft an AI policy, adopt a responsible-AI framework, and set measurable KPIs for sustainability, users and ethics [354-359].
MAJOR DISCUSSION POINT
Policy and metrics
Argument 18
Upskill teams and embed diverse perspectives to strengthen responsibility (Theresa)
EXPLANATION
Theresa stresses the importance of building internal capacity and ensuring diverse viewpoints are represented, as this underpins responsible AI implementation.
EVIDENCE
She calls for upskilling teams and incorporating diverse views as the most important factor for responsible AI [360-362].
MAJOR DISCUSSION POINT
Capacity building
Omeed Hashim
5 arguments, 161 words per minute, 2796 words, 1039 seconds
Argument 1
Sovereignty means control over data, models and the ability to prevent external shutdowns; loss of it kills trust (Omeed)
EXPLANATION
Omeed defines AI sovereignty as the capacity to decide where data resides, who accesses it, and to maintain operational control, arguing that losing this control erodes trust and leads to system failure.
EVIDENCE
He explains that sovereignty involves data location, access, purpose, and that without this understanding trust drops and systems become vulnerable [137-144]; he also cites a Serbian example of building domestic large-language models to retain control [145-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sovereignty risks from reliance on foreign ICT providers are outlined in [S26] and [S27]; health-sector sovereignty considerations are discussed in [S23].
MAJOR DISCUSSION POINT
Definition of AI sovereignty
AGREED WITH
Theresa Yurkewich Hoffmann
Argument 2
Green AI links environmental cost to economic viability; scalable AI must be low‑carbon (Omeed)
EXPLANATION
Omeed argues that AI’s environmental footprint must be considered alongside cost, because unsustainable energy use makes scaling impossible; greener systems are also more economical.
EVIDENCE
He describes how high energy and water consumption increase costs and prevent scaling, noting that more economic systems tend to emit fewer greenhouse gases and citing large data-centre power-use examples such as Microsoft’s centre consuming as much electricity as Los Angeles [154-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy and water consumption of AI systems and their economic impact are highlighted in [S10]; broader computing-resource constraints affecting sustainability are covered in [S24] and [S25].
MAJOR DISCUSSION POINT
Green AI rationale
AGREED WITH
Theresa Yurkewich Hoffmann
DISAGREED WITH
Theresa Yurkewich Hoffmann
Argument 3
National AI models and data sovereignty are critical; examples from Serbia, France, UK (Omeed)
EXPLANATION
Omeed highlights that countries are pursuing domestically hosted AI models to ensure data sovereignty and reduce reliance on foreign providers, providing concrete national examples.
EVIDENCE
He mentions Serbia’s plan to build its own large-language models for local control [145-147] and references France’s Mistral model as another domestic effort [245-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for domestic AI capabilities to avoid foreign dependence is discussed in [S26]; UK’s computing challenges that motivate domestic model development are noted in [S24].
MAJOR DISCUSSION POINT
Domestic AI initiatives
AGREED WITH
Theresa Yurkewich Hoffmann
Argument 4
Sovereignty vs. value: foreign models give speed but risk loss of control (Omeed)
EXPLANATION
Omeed discusses the trade‑off where adopting readily available foreign AI services accelerates development but creates dependency, potentially compromising national sovereignty and long‑term value.
EVIDENCE
He recounts conversations with officials who prefer building their own models rather than using U.S. services, fearing future shutdowns and loss of control, illustrating the sovereignty-value tension [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trade-offs between rapid adoption of foreign AI services and sovereignty concerns are examined in [S26].
MAJOR DISCUSSION POINT
Sovereignty‑value trade‑off
DISAGREED WITH
Theresa Yurkewich Hoffmann, Audience
Argument 5
Lobby governments for smart‑data sharing and domestic model development (Omeed)
EXPLANATION
Omeed suggests that the private sector should advocate for policies that promote smart data ecosystems and the creation of home‑grown AI models, which would enhance trust and reduce reliance on external providers.
EVIDENCE
He proposes pushing governments to enable smart data sharing across sectors (e.g., open banking extended to property markets) and to support UK-built language models, citing examples from Serbia and France as models to emulate [236-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy advocacy for responsible AI ecosystems and domestic model development is highlighted in the UNDP-focused report [S9] and governance challenges in [S21].
MAJOR DISCUSSION POINT
Policy advocacy for domestic AI
AGREED WITH
Theresa Yurkewich Hoffmann
Audience
4 arguments, 154 words per minute, 1127 words, 437 seconds
Argument 1
Upcoming data‑protection law will push responsible AI, but adoption remains minimal (Audience)
EXPLANATION
An audience member notes that a new data‑protection and personalization law is slated to become effective, which should drive responsible AI practices, yet current implementation is still negligible.
EVIDENCE
The participant states that the law will be enforced from October 2025, giving organisations 18-24 months to comply, but observes that only 0.1 % of responsible-AI measures are currently in place [230-235].
MAJOR DISCUSSION POINT
Regulatory gap
DISAGREED WITH
Theresa Yurkewich Hoffmann
Argument 2
Industry prefers bespoke IP over platform models, limiting broader societal value (Audience)
EXPLANATION
A participant describes how companies often develop AI solutions as proprietary IP for a single client, which hampers the creation of shared platforms that could deliver wider societal benefits.
EVIDENCE
The speaker explains challenges in scaling a vending-machine AI for one client (PepsiCo) without sharing it, compares this to India’s UPI platform that succeeded because it was open, and notes that consulting firms push bespoke projects rather than platform approaches [254-264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about AI democratization and the benefits of shared platforms versus proprietary solutions are raised in [S16].
MAJOR DISCUSSION POINT
IP vs platform business models
AGREED WITH
Omeed Hashim
Argument 3
Participants ranked responsible/value as most critical; none had all dimensions fully covered (Audience)
EXPLANATION
Audience feedback shows that attendees consider responsible and valuable AI the most essential lenses, while acknowledging that no organization currently implements all four dimensions comprehensively.
EVIDENCE
Responses indicate that most participants voted for responsible/value as the top priority and that no one had sovereign, green, or responsible frameworks in place, as shown by the poll results [279-283] and the later statement that no one had a sovereign AI policy or sustainability practice [332-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The incomplete adoption of the four-dimensional framework across organisations is noted in the overview of the framework in [S4] and [S6].
MAJOR DISCUSSION POINT
Prioritisation of AI dimensions
DISAGREED WITH
Theresa Yurkewich Hoffmann, Omeed Hashim
Argument 4
Question about ranking four lenses low to high (Audience)
EXPLANATION
An audience member asks the panel to rank the four 4D lenses (sovereignty, green, responsible, valuable) from low to high importance, seeking guidance on prioritisation.
EVIDENCE
The participant requests a ranking of the lenses and notes the desire to understand trade-offs, prompting the facilitator to conduct a quick poll [315-316].
MAJOR DISCUSSION POINT
Lens prioritisation request
Agreements
Agreement Points
Lack of sovereignty undermines trust and leads to AI project failure
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Only 30 % of AI projects reach production; lack of trust is a primary barrier (Theresa)
Sovereignty challenges: reliance on foreign providers reduces control (Theresa)
Sovereignty means control over data, models and the ability to prevent external shutdowns; loss of it kills trust (Omeed)
Both speakers stress that without control over data and models (sovereignty), stakeholder trust erodes, causing pilots to stall before production [13-14][61-64][137-144].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors the argument that early incorporation of sovereignty safeguards is essential to maintain trust and avoid project collapse, as highlighted in the discussion on building sovereign and responsible AI [S40] and reinforced by calls for open sovereignty and domestic AI deployment to mitigate reliance on foreign providers [S58].
Sustainability (green AI) must be balanced against rapid AI adoption and cost
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Sustainability vs. adoption: higher usage raises carbon cost; organisations must rank concerns (Theresa)
Green AI links environmental cost to economic viability; scalable AI must be low‑carbon (Omeed)
Both agree that AI’s energy and water demands create trade-offs; organisations need to consider environmental impact alongside speed and cost of deployment [306-313][154-165].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between environmental impact and speedy AI rollout is documented in analyses of Green AI, which describe the trade-off between progress and ecological footprints [S54] and emphasize the need for efficient, sustainable models to avoid widening digital divides [S55].
Responsible AI and the value lens are the most critical dimensions for successful AI
Speakers: Theresa Yurkewich Hoffmann, Audience
Responsible AI covers ethics, bias, governance and human‑centred design; essential for user trust (Theresa)
Value lens focuses on real‑world benefit beyond cost‑saving, e.g., wellbeing, new jobs (Theresa)
Participants ranked responsible/value as most important; no one had all four dimensions in place (Audience)
Theresa’s definition of responsible and valuable AI aligns with audience voting that these two lenses are top priorities for AI projects [280-285][279-283][332-339].
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible AI and value-based assessment are core principles of the AI Policy Research Roadmap, which stresses accountability, ethical governance and human welfare as foundational for AI success [S39].
Government policy and domestic AI capabilities are needed to ensure trust and reduce dependence on foreign providers
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Governments can define safe‑use lists, transparency and explainability rules; approaches differ (UK vs US) (Theresa)
National AI models and data sovereignty are critical; examples from Serbia, France, UK (Omeed)
Lobby governments for smart‑data sharing and domestic model development (Omeed)
Both highlight the role of state action-regulation in the UK/US and building home-grown models-to secure AI sovereignty and public confidence [213-224][236-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources underline the role of national policy and capacity building in fostering trustworthy AI and limiting foreign dependence, from capacity-building initiatives in AI governance [S41] to strategic diversification recommendations for domestic AI deployment [S58] and infrastructure investments to build local trust frameworks [S57].
Platform‑as‑a‑service models are preferable to proprietary IP for broader societal impact
Speakers: Audience, Omeed Hashim
Industry prefers bespoke IP over platform models, limiting broader societal value (Audience)
Suggest a service model with shared layers; avoid pure IP lock‑in (Omeed)
A private-sector participant’s concern about IP lock-in matches Omeed’s recommendation to adopt platform/service approaches for AI solutions [254-264][267-276].
Similar Viewpoints
Both see sovereignty as essential for trust and sustainability as a necessary trade‑off with rapid AI rollout [61-64][137-144][306-313][154-165].
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Sovereignty challenges: reliance on foreign providers reduces control (Theresa)
Sovereignty means control over data, models and the ability to prevent external shutdowns; loss of it kills trust (Omeed)
Sustainability vs. adoption trade‑off (Theresa)
Green AI links environmental cost to economic viability (Omeed)
Both prioritize responsible and valuable AI as the key lenses for successful deployment [280-285][279-283][332-339].
Speakers: Theresa Yurkewich Hoffmann, Audience
Responsible AI covers ethics, bias, governance and human‑centred design (Theresa)
Value lens focuses on real‑world benefit beyond cost‑saving (Theresa)
Participants ranked responsible/value as most important (Audience)
Both acknowledge that regulatory frameworks are needed to drive responsible AI, even though implementation is currently limited [213-224][230-235].
Speakers: Theresa Yurkewich Hoffmann, Audience
Governments can define safe‑use lists, transparency and explainability rules (Theresa)
Upcoming data‑protection law will push responsible AI, but adoption remains minimal (Audience)
Both argue for platform‑oriented AI delivery to increase societal impact and avoid restrictive IP models [254-264][267-276].
Speakers: Omeed Hashim, Audience
Industry prefers bespoke IP over platform models, limiting broader societal value (Audience)
Suggest a service model with shared layers; avoid pure IP lock‑in (Omeed)
Unexpected Consensus
Private‑sector participants endorse strong government involvement in AI governance despite typical market‑driven expectations
Speakers: Audience, Theresa Yurkewich Hoffmann, Omeed Hashim
Audience notes upcoming data-protection law will drive responsible AI but uptake is low [230-235]
Theresa describes government-defined safe-use lists and regulatory differences [213-224]
Omeed calls for lobbying governments for smart-data sharing and domestic model development [236-247]
While private entrepreneurs often favour minimal regulation, the audience member, Theresa and Omeed all stress the necessity of state-led policies and standards to ensure trust and sovereignty, revealing an unexpected alignment across sectors [213-224][230-235][236-247].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level multistakeholder forums report strong consensus among industry, government and civil society for coordinated AI governance, indicating private-sector willingness to accept robust governmental roles [S48]; similar sentiments appear in regional AI governance initiatives that call for strong state participation [S46].
Both speakers consider sustainability a lower organisational priority yet essential for scaling AI
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Theresa observes many organisations place sustainability low on their concern list [306-313]
Omeed links environmental cost to economic viability, implying it is often overlooked [154-165]
It is surprising that both highlight sustainability as commonly deprioritised, despite its critical role in making AI scalable, indicating a shared but under-addressed challenge [306-313][154-165].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at recent AI forums note that while sustainability may rank lower than immediate business goals, it is still viewed as a prerequisite for scalable AI deployment, reflecting the sustainability imperative highlighted in Green AI debates [S54] and the “Sustainability Imperative” sessions that stress its strategic importance despite lower priority [S65].
Overall Assessment

The discussion shows strong convergence among speakers on four core themes: (1) sovereignty and trust are inseparable; (2) sustainability must be balanced with rapid AI adoption; (3) responsible and valuable AI are the most critical lenses for success; (4) government policy, domestic model development and platform‑oriented delivery are essential to achieve trust, reduce dependence on foreign providers, and maximise societal benefit.

High consensus across speakers and audience on the importance of the 4D framework and the need for integrated policy, governance and technical approaches. This alignment suggests that future AI initiatives at the summit are likely to adopt a holistic, multi‑dimensional strategy, emphasizing sovereignty, green AI, responsible practices and tangible value creation.

Differences
Different Viewpoints
Priority of sovereignty versus responsible/value lenses
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim, Audience
Sovereignty challenges: reliance on foreign providers reduces control (Theresa)
Sovereignty vs. value: foreign models give speed but risk loss of control (Omeed)
Participants ranked responsible/value as most critical; none had all dimensions fully covered (Audience)
Theresa later downplays sovereignty, ranking it lower than responsible and valuable AI ([317-319]), while Omeed stresses that loss of sovereignty destroys trust and can cause project failure ([137-144]) and argues that sovereignty is a core trade-off with value ([292-298]). The audience poll shows participants consider responsible and valuable AI the most important and report no sovereign AI policies in place ([332-339]), indicating a disagreement on how critical sovereignty should be.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on whether sovereign control or responsible-value frameworks should dominate AI strategy echo the divergent positions observed in governance discussions, where some advocate open sovereignty as a risk-management tool [S58] while others stress ethical, value-based oversight as the primary driver [S51].
Emphasis on sustainability (green AI) versus speed of adoption
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Sustainability vs. adoption: higher usage raises carbon cost; organisations must rank concerns (Theresa)
Green AI links environmental cost to economic viability; scalable AI must be low‑carbon (Omeed)
Theresa treats sustainability as a lower-priority concern that many organisations place at the bottom of their risk-ranking ([306-313]), whereas Omeed argues that environmental impact is inseparable from economic viability and that AI cannot scale without being low-carbon ([154-165]). This reflects a clash over whether sustainability should be a primary driver or a secondary consideration.
POLICY CONTEXT (KNOWLEDGE BASE)
The clash between rapid AI rollout and environmental stewardship is a recurring theme, with participants at IGF 2023 highlighting the battle between progress and sustainability [S54] and noting that some Global South actors prioritize digitalisation over climate concerns [S53]; tactical disagreements on this balance were also recorded in sustainability-focused sessions [S64].
Extent of government regulation and intervention in AI deployment
Speakers: Theresa Yurkewich Hoffmann, Audience
Governments can define safe‑use lists, transparency and explainability rules; approaches differ (UK vs US) (Theresa)
Upcoming data‑protection law will push responsible AI, but adoption remains minimal (Audience)
Theresa describes emerging UK regulations that set safe-use lists and transparency requirements, contrasting them with the lack of US regulation ([213-224]), suggesting a gradual, sector-specific approach. The audience member argues that the government should take a more decisive role in deciding model utilization and that current responsible-AI adoption is negligible despite an upcoming law ([210-212]; [230-235]). The disagreement lies in how proactive and prescriptive government action should be.
POLICY CONTEXT (KNOWLEDGE BASE)
A split view on regulatory scope is documented in analyses that compare unrestricted technical development with calls for a central coordinating governmental body for AI governance [S51]; the broader regulatory lag between policy formulation and on-the-ground implementation is further illustrated by multiple reports on gaps in AI regulation [S60], [S61], [S62], [S63].
Unexpected Differences
Platform versus proprietary IP business models
Speakers: Audience, Theresa Yurkewich Hoffmann, Omeed Hashim
Industry prefers bespoke IP over platform models, limiting broader societal value (Audience)
Four‑dimensional framework focuses on sovereignty, green, responsible and valuable lenses without addressing platform sharing (Theresa)
Lobby governments for smart‑data sharing and domestic model development (Omeed)
The audience raises a concern about the inability to create shared platforms due to IP restrictions ([254-264]), which was not anticipated in the speakers’ discussion that centered on governance, sovereignty and sustainability rather than business-model structures. This introduces a new dimension of disagreement about how AI solutions should be commercialised and shared.
POLICY CONTEXT (KNOWLEDGE BASE)
The trade-offs between platform-as-a-service and proprietary intellectual property models are discussed in government-focused research that highlights security, privacy and control considerations influencing model choice [S45], alongside policy preferences for cloud-based services over proprietary solutions [S42].
Perceived gap between regulatory progress and on‑the‑ground adoption
Speakers: Audience, Theresa Yurkewich Hoffmann
Upcoming data‑protection law will push responsible AI, but adoption remains minimal (Audience)
Release a white paper with 8‑10 actionable items per dimension (Theresa)
While the audience claims that responsible-AI practices are virtually absent despite an imminent law ([230-235]), Theresa asserts that concrete guidance (a white paper with actionable steps) is already available ([342-346]), revealing an unexpected mismatch between perceived regulatory readiness and the availability of practical resources.
POLICY CONTEXT (KNOWLEDGE BASE)
Several sources point to a persistent discrepancy between AI policy ambitions and actual implementation, including observations of a regulatory-implementation gap in Ireland [S60], the general lag between technology adoption and legislative response [S61], and the OECD’s note on mismatches between multinational agreements and national enactments [S62]; this gap is also highlighted in monitoring of AI deployments in public services [S49].
Overall Assessment

The discussion revealed three main contention areas: (1) the relative importance of AI sovereignty versus responsible/value considerations; (2) the weight given to sustainability (green AI) compared with rapid deployment; and (3) the scope and immediacy of government regulation. While participants share a common goal of building trustworthy AI systems, they diverge on which dimensions should be prioritized and how policy should intervene. Unexpectedly, debates also surfaced around business‑model choices (platform vs. IP) and a perceived disconnect between regulatory guidance and actual implementation.

Overall level of disagreement: moderate to high. The disagreements are substantive, touching on strategic priorities (sovereignty vs. value), resource allocation (sustainability vs. speed), and governance approaches (regulatory depth). These divergences could hinder consensus on policy recommendations and implementation road‑maps, requiring further dialogue to align priorities across stakeholders.

Partial Agreements
Both agree that trust is essential for moving AI pilots to production, but differ on the primary source of trust: Theresa points to overall trust deficits across multiple dimensions ([13-14]), while Omeed focuses specifically on data and model sovereignty as the trust anchor ([137-144]).
Speakers: Theresa Yurkewich Hoffmann, Omeed Hashim
Only 30 % of AI projects reach production; lack of trust is a primary barrier (Theresa)
Sovereignty means control over data, models and the ability to prevent external shutdowns; loss of it kills trust (Omeed)
Both see a role for government in shaping AI practice, but Theresa emphasizes regulatory frameworks that set safe‑use boundaries, whereas the audience stresses the need for immediate, enforceable data‑protection legislation to drive responsible AI adoption ([213-224] vs. [230-235]).
Speakers: Theresa Yurkewich Hoffmann, Audience
Governments can define safe‑use lists, transparency and explainability rules; approaches differ (UK vs US) (Theresa)
Upcoming data‑protection law will push responsible AI, but adoption remains minimal (Audience)
Takeaways
Key takeaways
Only about 30 % of AI pilots progress to production, largely due to a trust deficit.
AI‑related incidents are rising (≈600 in 2025), eroding public confidence in AI systems.
A 4‑dimensional framework (sovereignty, sustainability (green), responsible, valuable) is needed to anticipate harms and enable scaling.
Common reasons proof‑of‑concepts fail: adoption‑impact gap, governance failures, misalignment with societal goals, sovereignty dependence on foreign providers, unsustainable resource use, and poor change‑management.
Scenario analysis showed each of the 4D lenses in action (health‑scan – sustainability; traffic‑lights – value; justice triage – sovereignty; social‑benefits – responsible/value).
Government regulation (safe‑use lists, transparency, explainability) and national AI models are critical for sovereignty and trust; approaches differ across regions (UK, US, Serbia, France).
Trade‑offs between dimensions are inevitable; prioritising responsible AI often creates value, but sustainability, sovereignty and value must be balanced.
Practical recommendations: publish a white paper with actionable steps, create an AI policy, adopt a responsible‑AI framework, define measurable KPIs for each lens, upskill teams, embed diverse perspectives, and lobby for smart‑data sharing and domestic model development.
Resolutions and action items
Release and distribute a white paper containing 8‑10 concrete actions for each of the four dimensions.
Encourage participants to develop an organisation‑wide AI policy that outlines priorities across the 4D lenses.
Adopt a responsible‑AI framework with defined governance, risk, bias, and human‑centred design processes.
Define and track quantitative KPIs for sustainability (e.g., carbon/energy use), sovereignty (e.g., % of models hosted locally), responsibility (e.g., bias audit scores), and value (e.g., user‑outcome metrics).
Upskill staff on the 4D framework and incorporate diverse stakeholder views into AI projects.
Lobby governments for clear safe‑use regulations, data‑protection laws, and support for domestic model development and smart‑data sharing.
Shift business models from pure IP ownership toward service/platform approaches that enable broader societal impact.
Unresolved issues
How and when upcoming data‑protection and data‑personalisation legislation will be operationalised and enforced. Concrete pathways for private firms to influence or rely on government‑defined safe‑use lists and AI regulations. Methods to build and sustain platform‑level AI solutions (e.g., for vending‑machine agents) when large customers demand exclusive IP. Specific mechanisms for balancing sovereignty with value when using foreign large‑language models versus developing domestic alternatives. How organizations should rank the four dimensions (sovereignty, sustainability, responsibility, value) for their unique contexts. Lack of existing responsible‑AI practices, sovereignty policies, and sustainability metrics among many participants – no clear plan to implement them yet.
Suggested compromises
Accept a limited sustainability impact to accelerate AI adoption while planning longer‑term carbon‑reduction measures. Use foreign AI models for rapid prototyping but concurrently invest in domestic model development to preserve future sovereignty. Adopt a hybrid IP/service model: retain core IP for the primary client while offering a modular service layer that can be reused across other customers. Map all identified harms and concerns as high/low and make trade‑off decisions transparently, allowing some lower‑priority dimensions to be deprioritised temporarily. Balance rapid value delivery (e.g., time‑saving) with responsible design by embedding human‑centred checks early in the pilot phase.
Thought Provoking Comments
Only 30 % of all AI projects actually go into production. One of the main reasons is that we don’t have trust – trust in the technology, in the data, in the impacts on jobs and people.
Sets the central problem of the session with a striking statistic and links it directly to the theme of trust, framing the entire discussion.
Established the urgency of the topic, prompting the audience to think about why pilots fail and leading directly into the later analysis of trust dimensions.
Speaker: Theresa Yurkewich Hoffmann
The OECD AI Observatory recorded 600 AI incidents in December 2025 alone – from voice‑cloning scams in Romania to AI‑generated books in Cairo that still showed the prompts, to biased facial‑recognition at borders.
Provides concrete, global examples of AI harms, moving the conversation from abstract concerns to real‑world consequences.
Illustrated the stakes of mistrust, reinforcing the need for robust governance and setting up the audience for the six failure reasons that follow.
Speaker: Theresa Yurkewich Hoffmann
Proof‑of‑concepts fail for six reasons: adoption vs impact gaps, governance failures, misalignment with societal goals, sovereignty issues, sustainability pressure, and change‑management challenges.
Offers a comprehensive diagnostic framework that categorises the root causes of AI pilot failures.
Structured the subsequent discussion, allowing participants to map their own experiences onto these categories and paving the way for the 4D model.
Speaker: Theresa Yurkewich Hoffmann
We propose an AI 4D framework – Sovereignty, Green (sustainability), Responsibility, and Value – as four lenses to build trust and predict harms before scaling AI.
Introduces a novel, easy‑to‑communicate model that synthesises the earlier failure categories into actionable dimensions.
Guided the interactive scenario exercises and the audience poll, focusing the conversation on evaluating projects against these four lenses.
Speaker: Theresa Yurkewich Hoffmann
Sovereignty is really about control – whose data is it, who can see it, who can turn the model off. If people don’t understand that, trust collapses and the system fails.
Deepens the notion of sovereignty beyond organisational boundaries to include individual data rights, linking trust directly to control.
Shifted the tone from a technical checklist to a socio‑political concern, prompting participants to consider regulatory and national‑level implications.
Speaker: Omeed Hashim
Addressing both environmental effects and cost works nicely together – greener AI is cheaper to run, and cheaper AI is greener. If an AI system can’t scale sustainably, it won’t scale at all.
Connects sustainability with economic viability, reframing green AI from a compliance cost to a strategic advantage.
Introduced a trade‑off perspective that broadened the discussion of the ‘Green’ dimension, leading participants to consider carbon footprints in business cases.
Speaker: Omeed Hashim
As a private‑sector founder I’m left to make all AI decisions alone; we need governments to set clear safe‑use guidelines and regulation, otherwise we’re stuck competing on price and speed without any ethical baseline.
Brings a real‑world stakeholder viewpoint, highlighting the gap between corporate experimentation and public policy.
Prompted the panel to discuss regulatory differences across countries (UK vs US vs India) and reinforced the need for a shared governance framework.
Speaker: Ami Kotecha (Audience)
Governments should push for smart data sharing and domestic language models – like the UK building its own LLMs – so we aren’t dependent on foreign providers that could be switched off tomorrow.
Offers a concrete policy recommendation that ties sovereignty to national AI capability and resilience.
Steered the conversation toward actionable steps for policymakers and sparked agreement on the importance of local model development.
Speaker: Omeed Hashim
Why are consulting firms not building platform‑as‑a‑service models like UPI? We keep getting asked to build bespoke solutions for one client, which locks the IP and prevents broader societal impact.
Raises a strategic business‑model challenge that intersects the Value and Responsibility dimensions, questioning current industry practices.
Expanded the dialogue beyond technical lenses to market dynamics, leading to suggestions about service‑based IP models and highlighting barriers to scaling AI responsibly.
Speaker: Audience member (platform discussion)
In a nursing‑home hydration‑monitoring AI, we risk turning nurses into scapegoats and invading residents’ privacy – the system’s value depends on who it actually helps and how it changes relationships.
Provides a nuanced, human‑centered example that illustrates unintended social consequences, emphasizing the interplay of all four dimensions.
Deepened the conversation about Responsible AI, showing that ethical design must anticipate secondary effects, and reinforced the need for holistic evaluation.
Speaker: Omeed Hashim
Overall Assessment

The discussion was driven forward by a series of layered insights that moved from a stark failure statistic to a multidimensional framework for trustworthy AI. Theresa’s opening data and the 4D model gave participants a shared vocabulary, while Omeed’s expansions on sovereignty, sustainability, and policy made those lenses concrete. Real-world concerns voiced by audience members (regulatory uncertainty, platform business models, and human-centered harms) injected practical urgency, prompting the panel to link abstract concepts to actionable steps. Together, these pivotal comments transformed the session from a high-level overview into a concrete, stakeholder-rich dialogue about how to operationalise trustworthy AI across technical, ethical, economic, and geopolitical dimensions.

Follow-up Questions
How do you see the role of government in AI safety and regulation evolving over the next 6‑12 months and beyond, and what impact will that have on private sector AI adoption?
Understanding future regulatory timelines is crucial for companies to plan investments, risk management, and compliance strategies for AI deployments.
Speaker: Ami Kotecha (audience)
How can organisations build AI value at a platform level rather than delivering bespoke, single-customer solutions that lock in IP and limit broader impact?
A platform approach could unlock economies of scale, wider societal benefits, and avoid fragmentation of AI capabilities across competing clients.
Speaker: Audience member (vending‑machine AI entrepreneur)
What mechanisms or governance models can address situations where a company acquires AI technology but chooses not to commercialise it for societal benefit, raising responsible‑AI and value concerns?
Ensuring that AI innovations serve the public good rather than being hoarded is essential for responsible AI deployment and maximising societal value.
Speaker: Audience member (sustainability IP owner)
Why are major IT services firms (e.g., Kainos, Infosys, Accenture) not pursuing platform‑as‑a‑service models for AI, and can such platform initiatives be created inorganically rather than organically?
Identifying barriers within large consultancies to platform strategies can inform policy or business‑model changes that promote broader AI adoption.
Speaker: Audience member (platform‑focused entrepreneur)
Can you discuss scenarios where prioritising AI sovereignty might conflict with responsible or valuable AI outcomes, and vice‑versa, and how organisations can balance or align these dimensions?
Trade‑offs between control (sovereignty) and ethical/value considerations are central to designing trustworthy AI systems; guidance is needed to navigate them.
Speaker: Audience member (question on sovereignty vs responsibility/value)
How would you rank the four AI lenses—sustainability, sovereignty, responsible AI, and valuable AI—from lowest to highest importance for a given project?
Prioritisation helps organisations allocate resources and focus on the most critical dimensions for successful AI deployment.
Speaker: Audience member (ranking question)
Which single AI lens (sovereignty, green/sustainability, responsible AI, or valuable AI) is an absolute must‑have, such that its absence would derail a project?
Identifying a non‑negotiable dimension can guide minimum compliance standards and risk mitigation strategies.
Speaker: Audience member (must‑have lens question)
Who currently has a responsible AI practice, a sovereign AI policy, or a sustainability framework in place within their organisation?
Understanding current adoption levels informs gaps in practice and highlights opportunities for knowledge‑sharing and capacity‑building.
Speaker: Theresa Yurkewich Hoffmann (prompt to audience)
What are the root causes behind the low conversion rate (≈30 %) of AI pilots to production, and how can organisations improve the transition from proof‑of‑concept to impact?
Investigating failure factors can lead to better governance, adoption, and impact strategies for AI projects.
Speaker: Theresa Yurkewich Hoffmann (stated statistic)
How can the environmental impact (carbon footprint, water usage) of AI workloads be accurately measured and incorporated into sustainability decision‑making?
Quantifying AI’s resource consumption is essential for aligning AI deployments with net‑zero and climate goals.
Speaker: Theresa Yurkewich Hoffmann / Omeed Hashim (discussion on green AI)
What frameworks or metrics can be developed to assess the societal value of AI beyond financial ROI, such as well‑being, job creation, or equity?
Moving beyond cost‑benefit analysis enables organisations to capture broader public‑interest outcomes of AI.
Speaker: Theresa Yurkewich Hoffmann / Omeed Hashim (discussion on valuable AI)
How does data sovereignty affect trust in AI systems, and what governance structures are needed to ensure transparent data handling across jurisdictions?
Clarity on data ownership and location is pivotal for user trust and compliance with emerging data‑sovereignty regulations.
Speaker: Omeed Hashim (discussion on data sovereignty)
What are effective human‑centered design practices for AI in sensitive domains (e.g., elderly care monitoring), and how can potential negative impacts on caregivers and families be mitigated?
Designing AI that respects all stakeholders’ needs reduces harm and improves adoption in high‑stakes environments.
Speaker: Omeed Hashim (example of nursing home AI)
What policy approaches (e.g., UK vs US vs India) are most effective for governing high‑risk AI applications, and how can cross‑jurisdictional lessons be shared?
Comparative policy analysis can guide nations in crafting balanced AI regulations that protect citizens while fostering innovation.
Speaker: Theresa Yurkewich Hoffmann (comparison of regulatory regimes)
What decision‑making processes or tools can help organisations map and prioritize AI harms and trade‑offs across the four lenses, turning qualitative concerns into measurable KPIs?
Operationalising trade‑off analysis enables systematic risk management and demonstrates accountability to stakeholders.
Speaker: Theresa Yurkewich Hoffmann (final recommendations)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Inclusive Societies with AI


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel convened to examine how India’s informal workforce can be better integrated into the formal economy through technology, policy and partnership [1-4]. It brought together Arundhati Bhattacharya of Salesforce India, Aditya Natraj of the Piramal Foundation, and Manisha Verma, Additional Chief Secretary of Maharashtra’s SEED department [6-8][11-13][17-19].


Romal Shetty identified five systemic roadblocks for informal workers: discovery and trust, steady demand, timely payment, upskilling and access to protection [26-31]. Arundhati argued that a nationwide digital marketplace is essential to make workers’ credentials visible, match them with local opportunities and create verifiable upskilling pathways [34-38]. She emphasized that delayed payments plague even large corporates and the government, and that only a digital platform can generate an audit trail to enforce accountability [40-44]. She warned that reports without an execution authority remain ineffective, calling for a single body to own implementation of the suggested platform [45-50].


Manisha described Maharashtra’s newly created SEED department, which oversees more than a thousand ITIs, a state vocational board, short-term skilling schemes and a public skills university to broaden vocational education [57-66][72-73]. The department also targets vulnerable groups such as prisoners, persons with disabilities and tribal communities through tailored training programmes [76-77]. She highlighted that the state supports roughly 35 000 startups through hackathons, grant challenges, district-level committees and a “Startup Week” that awards direct work orders and helps firms secure patents and market access [146-154][158-170]. A public-private partnership model allows industry to manage ITI curricula and provide on-the-job training, exemplified by a 100 % placement pilot with Mahindra Tractors for tribal students [274-280][284-285].


Aditya noted that the poorest quartile, especially women and tribal populations in five eastern states, remain outside the market and lack basic education, limiting productivity gains [86-90][94-100]. He argued that aggregating blue-collar workers-through models like FabIndia, Amul or UrbanClap-can create quality standards, enable technology deployment and improve earnings [188-200][210-213]. His experience digitising health workers showed that adoption depends on four user profiles, from non-phone users to tech-savvy youths, and that one-size-fits-all programmes fail [293-302][308-322].


The panel collectively agreed that digital platforms, government stewardship, industry partnership and context-specific behavioural interventions are required to unlock the informal sector’s potential [34-38][45-50][274-280][293-322]. They concluded that coordinated multi-stakeholder action, anchored by an accountable implementation body, is essential for scaling inclusive growth and formalising India’s 490 million informal workers [45-50][274-280][293-322].


Keypoints

Major discussion points


Systemic roadblocks faced by the informal workforce and the need for a digital, accountable platform – The panel identified five core challenges – discovery & trust, steady demand, timely/fair payment, upskilling, and access to protections – and argued that a nationwide digital marketplace with verifiable credentials and payment tracking is essential, while also stressing that a clear execution authority must be assigned to implement such solutions [26-32][34-44][45-50].


Government-led skilling, vocational education, and inclusive social programmes – Maharashtra’s newly created SEED department oversees more than a thousand ITIs, a state board for accreditation, a dedicated skills university, and targeted skilling for prisoners, people with disabilities, women and tribal communities, illustrating a comprehensive public-sector approach to human-capital development [57-66][71-78].


Aggregation of blue-collar workers to ensure quality and market access – The discussion highlighted that unlike white-collar professions, informal workers lack organized aggregators; models such as cooperatives (Amul/SEWA), private-sector platforms (FabIndia), and gig-style rating systems (UrbanClap) are needed, with the National Rural Livelihood Mission (NRLM) identified as a key government mechanism for such aggregation [188-200][214-215].


Leveraging the startup ecosystem and public-private partnerships for social impact – Maharashtra’s vibrant startup scene (≈35,000 startups) is being catalysed through hackathons, grant challenges, “Startup Week”, direct work-order awards, and sector-focused innovations in clean energy, health, agriculture, etc., demonstrating how entrepreneurial solutions can be scaled through government support [146-176][180-184].


Behavioural and adoption barriers to digital/AI interventions – A case study of ASHA health workers shows four distinct user groups (from non-phone users to tech-savvy youths); one-size-fits-all programmes fail, and tailoring interventions to these varied “Indias” is crucial for successful technology uptake [295-303][308-322].


Overall purpose / goal of the discussion


The panel convened representatives from industry, the development sector, and government to diagnose the structural challenges of India’s informal sector, evaluate digital and policy-based interventions, and chart a coordinated, accountable roadmap-combining technology, skilling, aggregation, and entrepreneurship-to unlock productivity, inclusion, and sustainable livelihoods for the country’s 490 million informal workers.


Overall tone and its evolution


– The conversation opens with a formal and collaborative tone, emphasizing the privilege of a multi-stakeholder panel.


– It shifts to a critical and urgent tone when panelists point out execution gaps, systemic payment delays, and the lack of an accountable authority [45-50].


– As examples of successful pilots and startup initiatives are shared, the tone becomes optimistic and solution-focused, highlighting concrete impacts.


– The discussion concludes on a hopeful and appreciative note, recognizing the collective expertise and reaffirming confidence in India’s ability to address the informal sector’s challenges [324-328].


Speakers

Manisha Verma – Additional Chief Secretary, SEED, Maharashtra; IAS officer (1993 batch); Head of the Department of Skills, Employment, Entrepreneurship, and Innovation. Expertise in vocational education, skilling programs, innovation ecosystems, and industry-government partnerships. [S1][S2]


Aditya Natraj – CEO, Piramal Foundation. Expertise in education reform, community-led development, poverty alleviation, behavioral change, and digital adoption for informal workers.


S. Anjani Kumar – Moderator/introducer of the panel (role not explicitly titled). [S5]


Arundhati Bhattacharya – Chairperson and CEO, Salesforce India (former SBI Chairperson). Recognised Padma Shri awardee; expertise in responsible AI, inclusive technology adoption, public-private collaboration, digital economy, ethics, governance, and sustainability. [S7]


Romal Shetty – CEO, Deloitte South Asia; moderator of the discussion. [S9][S10]


Additional speakers:


Roy – Mentioned in the opening remarks (“Thank you so much, Roy”). No specific role or expertise identified in the transcript.


Full session report: Comprehensive analysis and detailed insights

The session opened with S. Anjani Kumar framing the challenge of bringing India’s 490 million informal workers – roughly 90 % of the country’s workforce [70-72] – into the formal economy. He introduced a three-pronged panel representing industry, development and government: Ms Arundhati Bhattacharya, chairperson and CEO of Salesforce India and Padma Shri award-winner; Mr Aditya Natraj, CEO of the Piramal Foundation and veteran education-reform leader; and Ms Manisha Verma, Additional Chief Secretary of Maharashtra’s SEED department, a senior IAS officer who has drafted landmark legislation such as the National Food Security Act [1-5][6-10][11-13][17-20].


Romal Shetty, CEO of Deloitte South Asia, outlined the five systemic roadblocks identified in the NITI Aayog study – lack of discovery and trust, irregular demand, delayed or unfair payment, insufficient upskilling, and limited access to social protection [26-31]. He expanded the analysis with a “persona-led” framework that maps the diverse actors in the informal economy – cultivators, artisans, middlemen, textile workers, trade workers and those facing migration pressures [70-78].


Arundhati Bhattacharya argued that, given India’s scale, only a digital marketplace can overcome these barriers. She illustrated a scenario in which a skilled plumber in a remote village would be unable to find work without a platform that records credentials, matches opportunities and creates a verifiable payment trail [34-38]. She noted that without such a system there is no auditable footprint for payment delays, which affect even large corporates and the government [40-44]. Arundhati called for a single, accountable authority to own the execution of this national platform [44-50] and highlighted the committee’s recommendation for an “Uber-like” platform that aggregates demand and supply in real time [210-215].


Manisha Verma described Maharashtra’s SEED department, which oversees more than a thousand Industrial Training Institutes (ITIs) and a state board that accredits private skill-training providers [57-66][71-78]. The department runs short-term skilling programmes through the Maharashtra State Skilling Society and has introduced evening courses for non-ITI students, complemented by a partnership with Mahindra Tractors that guarantees 100 % placement for participants [300-312]. By integrating vulnerable groups – prisoners, persons with disabilities, women and tribal communities – into these schemes, the department seeks an inclusive human-capital pipeline [76-78].


Building on the state’s human-capital agenda, Manisha highlighted Maharashtra’s vibrant startup ecosystem, now home to roughly 35 000 registered startups – the largest concentration in India [146-147]. The government catalyses this ecosystem through hackathons, grant challenges, district-level “startup yatras”, and a flagship “Startup Week” that receives about 3 000 entries annually, short-lists them via an independent jury and awards direct work orders worth up to ₹25 lakh [158-170]. Successful examples include Sagar Defense (marine-surveillance technology for the Indian Navy), a low-cost home-diagnostic health app, a sustainable menstrual-hygiene solution, and a wheelchair-to-two-wheeler conversion for the physically challenged [174-185]. A public-private partnership (PPP) policy empowers industry anchor partners to manage ITI curricula for ten to twenty years, provide expert faculty and co-design training – a model echoed in the central PM Setu scheme [260-268][274-280].


Aditya Natraj shifted the focus to aggregation, arguing that blue-collar workers lack the organised marketplaces enjoyed by white-collar professions. He cited models such as FabIndia’s brand-centric supply chain, the cooperative structures of Amul and SEWA, and gig-platform rating systems like UrbanClap, all of which create predictable quality and enable consumers to choose providers confidently [188-200][203-208]. He identified the National Rural Livelihood Mission (NRLM) and its state counterpart SRLM as critical government mechanisms for aggregating informal workers and feeding them into such platforms [214-215]. Aditya also stressed that productivity gaps stem from inefficient workflows and tooling deficits, not merely skill shortages [80-85]. He highlighted that the poorest quartile – especially women married before 18 and tribal populations in eastern states – often lack even six years of schooling, making the usual 10th-standard prerequisite for ITI training unrealistic [86-90][94-100][108-110]. Consequently, he called for “three-times-hard” interventions that first address basic literacy before delivering advanced digital certifications [112-113].


A concrete illustration of behavioural adoption barriers came from Aditya’s digitisation of the ASHA health-worker programme. He identified four user profiles: (i) workers over 50 with no phone experience; (ii) those with only a dumb phone; (iii) smartphone owners who use devices solely for entertainment; and (iv) young, tech-savvy workers who already use phones for side-businesses [295-303][308-322]. He argued that one-size-fits-all programmes fail because they ignore these distinct adoption curves, and that interventions must be tiered to each group’s comfort with technology [319-322].


Across the panel there was strong consensus on several fronts. All agreed that multi-sector collaboration-bringing together industry, development agencies and government-is essential for any platform to succeed [1-5][6-10][11-13][17-20][122-124]. The need for a nationwide digital marketplace that aggregates workers, showcases verifiable credentials and records payment timelines was affirmed by Arundhati, Romal and Aditya [34-38][42-44][137-140][188-215]. Upskilling was universally recognised as a lever for productivity, but participants concurred that certification must be digital, programmes should be tailored to the four adoption profiles, and vulnerable groups require special attention [37-38][79-80][55-58][309-318]. Finally, the importance of digital footprints for building trust and ensuring accountability was echoed by both Arundhati and Romal [42-44][113-115].


In conclusion, the panel underscored that unlocking the potential of India’s informal workforce demands a coordinated roadmap: a dedicated execution authority to own a national digital marketplace; integration of the NRLM/SRLM aggregation mechanisms; scalable upskilling pathways linked to digital certification; robust public-private partnership models for ITI curriculum design; and targeted, low-tech pilots that build trust among the most marginalised. By aligning technology, policy and entrepreneurship, the stakeholders expressed optimism that India can move from “fantastic reports” to tangible, inclusive growth for its 490 million informal workers [45-50][274-280][309-322].


Session transcript: Complete transcript of the session
S. Anjani Kumar

show a video which will give you context of what the informal sector is, what are some of the interventions that can be taken before I call the esteemed panel to have a discussion on the topic. So we are privileged to have a panel today, which represents industry, the development sector, and the government. You know, all of the ecosystem has to come together to solve for this problem. So may I now invite my first panelist, Ms. Arundhati Bhattacharya, chairperson and CEO, Salesforce India. Thank you. She is the recipient of the Padma Shri, India’s fourth highest civilian award, and has frequently been featured on Forbes’ World’s 100 Most Powerful Women and Fortune’s World’s 50 Greatest Leaders lists.

She is a strong advocate of responsible AI, inclusive technological adoption, and public-private collaboration for national growth. She is instrumental in expanding India’s digital economy while embedding ethics, governance, and sustainability into technology ecosystems. Thank you, ma’am, for joining us today. Representing the development sector, we have the pleasure of inviting Mr. Aditya Natraj, the CEO of Piramal Foundation. He’s a prominent education reform leader and also the founder of Kaivalya Education Foundation and the Piramal School of Leadership. He has over 20 years of experience in the development sector, including a significant tenure with… driving volunteer-led literacy campaigns in rural India. He’s been recognized as an Ashoka Fellow, an Echoing Green Fellow, and an Aspen India Fellow.

He’s also the recipient of Times Now’s Amazing Indian Award in Education. Thank you, Aditya, for joining us. On the government side, again, I’m privileged to request Ms. Manisha Verma, Additional Chief Secretary, SEED, Maharashtra. She’s a 1993 batch IAS officer who has contributed to drafting transformative regulations in India, like the National Food Security Act, the Forest Rights Act, the Rights of Persons with Disabilities Act, the Right to Education Act, MGNREGA, and others. She’s been felicitated by the Honorable President, the Honorable Prime Minister, NITI Aayog, the Honorable Governor, and the Honorable Chief Minister for various initiatives, and is also a recipient of the Maharashtra Foundation Award for Outstanding Policy. Thank you, ma’am, for joining us. And to kick us off, I’m delighted to welcome Romal Shetty, CEO of Deloitte South Asia, to

Romal Shetty

Thank you so much, Roy. Good afternoon, ladies and gentlemen, and always a privilege to have a wonderful panel here. So maybe I’ll kick off first with you, Arundhati, to start with. As you know, when we did our study, obviously you and Arundhati were significant contributors to that study. We’ve seen that the informal workforce basically faces about five really systemic roadblocks. One is being discovered and trusted. Second is getting some steady demand. Third is getting fair and timely payment. Then upskilling, that sort of translates into higher productivity. And, of course, accessing protections, insurance and others. So how do you see these challenges playing out in the future, and which of these must be prioritized in the next 12 to 18 months?

Arundhati Bhattacharya

So given the fact that ours is a very populous nation, I don’t think we have a way other than a digital way of addressing these solutions. In the sense that you might have a worker, say a person who works as a plumber, who might be really, really good at his job and there might be very good opportunities in his village or in the village next to his, but he has no idea that it exists. So this lack of knowledge is not something that you can manage to do away with unless you have some kind of a marketplace where people can put in not only their credentials and their experience, but also be able to access the opportunities that are there for their kinds of jobs.

That’s one piece. The second piece is that unless and until we put all of… these people together, we would also not understand what is the upskilling that is required for such people, because more and more, as days go by, we are realizing that everything is changing. All of the technology is changing, and the change in technology is such that it requires people to be further upskilled. Now, how do you get that upskilling? How do you ensure that you have a verifiable certification that you have gone through that upskilling? Again, you have got to come back to the digital area. Third is regarding getting payment on time. As you said, this is something, by the way, which is a very big problem across India, and it does not only impact the blue-collar workers; it impacts even the MSMEs and the SMEs. And sadly enough, I would say it is the big corporates that are the worst at this, including the government. I mean, I cannot not include the government over there, because getting payments on time in India is something that is not considered to be at all important.

You have to do it, so you do it at some point of time. And this does not speak well for us as a country. It really adds to the difficulty of doing business, because you are not funding people at the moment they need to be funded. There has to be accountability for all of this, and unless you use a digital platform, there is no footprint of the delays that are taking place. So, in the report that we put out together, there were other people, especially your people, the Deloitte people, who did a lot of work on this and actually suggested a platform where all of these things could be comprehensively addressed.

Now, I was just asking Romal before coming in here: India is great at putting out fantastic reports. At the end of the reports, who is charged with the execution? Who is really accountable, such that if it doesn't get executed, there is a downside? We have no such downsides. We have suggestions, we have reports, and then we don't have a person charged with the execution. I think it's time for all of us to understand that reports are great and suggestions are fantastic, but there has to be an authority that will take charge of this, run with it, and be accountable for actually implementing it. Because there are some really, really good suggestions in there that need to be implemented.

Romal Shetty

Thank you, Arundhati. You know why she was the SBI chairperson: she has a strong mind of her own and is always willing to challenge the status quo, in her own life as well as, of course, in the various positions she has held. Thank you, Arundhati. So Manisha, a question to you now, and this is really about Maharashtra. Could you share an overview of the work being undertaken by your department, for the benefit of all the delegates here, and how it is working towards enhancing human capital and social inclusion?

Manisha Verma

So first of all, thank you so much for having me here. I'm looking forward to a great dialogue with these esteemed panel members as well as all of you. I head the Department of Skills, Employment, Entrepreneurship, and Innovation; that is why it is written SEED, so it's not a very common kind of department. This is a newly constituted department in Maharashtra. To put it simply, it oversees the entire vocational education spectrum. There are a thousand-plus institutes, ITIs, government and private, which are the cutting edge; they are the cradle for creating a skilled workforce for industry, manufacturing, and the service sector, but mainly manufacturing. All the ITIs are under the department's oversight.

But we are also looking at short-term skilling programs through our Maharashtra State Skilling Society, which handles all the Government of India programs and the state budget resources for skilling. Then we have a State Board of Vocational Education and Training: if you are a private provider of skill training, the accreditation and recognition of your courses is done by our state board, and affiliation is also given. Today, as you know, there is a lot of duping of ordinary people; there is no information as to whether the courses offered in the market are actually accredited or have value. So this body does independent assessment of the training institutes and gives affiliation and recognition.

And then, to complete the spectrum: students from ITIs, or people doing vocational education, might have aspirations for higher education. So we recently set up a public state skills university, the Ratan Tata State Skills University in Maharashtra, which is also doing pretty well, though it is in its infant stages. And then we have a Maharashtra State Innovation Society under my department, which looks at the promotion of startups and incubators. So this is the whole spectrum of the work we are doing. But, not to miss out the vulnerable groups for social inclusion, we are also partnering with agencies to do skilling for jail inmates, people with disabilities, women, tribal areas, and so on.

So that in brief is the work that we are doing. Thank you.

Romal Shetty

Thank you, Manisha. Aditya, one of the core insights from our study, where we worked with the government, was that productivity gaps often come from inefficient workflows and tooling deficits rather than any lack of work effort. So as we look to increase productivity 10x to really realize the Viksit Bharat aspirations, what guardrails do you think should be in place so that technology augments workers, improves their safety and earnings, and does not replace them altogether?

Aditya Natraj

Yeah. So thank you very much for having me on this panel. It was great fun to be part of the committee at NITI Aayog as well, which put this together, thanks to Deloitte's efforts. I think when we're talking about the informal labor force, we're all imagining the electrician who comes to our house, right? And so we're imagining an upgrade of that. We at the Piramal Foundation work with the bottom quartile of India. Largely, the top quartile is sitting in this room and driving the growth. The next quartile supports that growth by being drivers, electricians, plumbers. The next quartile is just about surviving. And the fourth quartile, honestly, you first have to tune in to even understand how badly off they are.

There are still, as per official statistics, over 200 million people in India in poverty, right? The areas where we focus are the five eastern states, for example. So when you're talking about a productivity deficit, I'll give you a few statistics, because we're imagining a plumber coming into my house and asking how I increase his productivity. But what about the women? 50% of India is women, right? In the states where we work, Jharkhand, Assam, Chhattisgarh, Orissa, Bihar, today the number is 36% of women getting married below the age of 18. What is going to be her productivity gap? "I got married before the age of 18. My productivity is measured by how fast I produce the first child and the second child."

"And all my energy is going into just taking care of children." What is AI going to do for this girl who, by the age of 20, has two children at home? What is it going to do, over the next 10 years, for the tribal who is still in the Dandakaranya forest in South Chhattisgarh? That group of people has a lower growth rate than the median of India; as it is they were lower, and they have a lower growth rate. So really increasing productivity for that group, I think, is going to be key, because it's not about taking the top quartile to $29,000, right? That is going to happen anyway, because there are automatic mechanisms in the market to incentivize that productivity gain.

The bottom quartile is not yet plugged into the market, right? These are the 70 million people who are in poverty in these five states. Among them, the statistic is that 40% of those families don't have even one person who has had six years of education. Six years; we're not talking about 10th standard. Yet a lot of our programs are designed on the assumption that after 10th standard you're going to do ITI, you're going to do this or that. So this bottom quartile really needs attention. I think productivity gains are going to come from us understanding why the bottom quartile is not involved in the market, and from working three or four times as hard so that they are not pulling the median of India down.

Romal Shetty

And, you know, as consultants, when we look at these reports, and I can tell you this from the NITI one, it is these kinds of inputs that matter, because it's very easy sometimes to stand far off and give recommendations. When you get into the nitty-gritties, you realize that there have to be different solutions, and this report was one where really different sets of people came together to contribute. Arundhati, back to you. We created these persona-led profiles, the carpenter, the cultivator, and we chose this because challenges differ, right? So cultivators face volatility and information gaps.

Artisans face market-access gaps and dependence on middlemen. Textile workers face skills and technology gaps, and trade workers, of course, face income insecurity and migration pressure as well. So how do you balance a centralized approach while ensuring each persona's unique challenges are solved for?

Arundhati Bhattacharya

So basically, again, there cannot be a cookie-cutter solution to all of this. Because the professions are so different and the challenges are so different, you necessarily need to solve for people in different ways. There are certain fundamental issues that bother all of them, whether it's an issue of access, of health, of basic understanding and literacy; these are basic issues that need to get fixed at a very early level in their lives. But if you are looking beyond that, at the different kinds of people vertical-wise and the different ecosystems they work in, you will necessarily have to come up with different solutions. And here, I think, the major stakeholder, which is the government, has a role to play.

Because it is the government that is going to enable the ecosystem that helps these people grow. As was being said just now, the upper-quartile people can help themselves; the people who are at the lowest quartile actually need help. And I remember one incident from the Youth for India program we used to run in State Bank of India, where people would take a gap year and come and serve in the villages. One such fellow was serving in one of the villages of the Dang tribals, who work with bamboo. And he discovered that the equipment they were working the bamboo with was basically Stone Age equipment.

Literally Stone Age equipment. Just by changing the nature of the equipment they were working with, and again nothing very fancy, nothing with technology or AI, the quality of the product improved so much that it found a much better purchase in the market. So solutions may be very simple, but they have to be innovated on the spot, by actually getting knowledge of what really is holding people back. Again, this needs a lot of work, and a lot of work by people in that place, which, again, has to be partly the government.

Romal Shetty

And in fact, the platform that the committee recommended was, in some sense, meant to help Uberize, to create demand, and also to build skills, so that as long as you have a simple phone, you could actually use it. So, Manisha, coming to the startup ecosystem: obviously Maharashtra has been doing phenomenally well here. Could you share how you're driving societal impact through this startup ecosystem?

Manisha Verma

I think, honestly, the startup ecosystem is something that has grown organically, and government should not take too much credit. I was just sharing with Arundhati ji as we were entering that some things are on autopilot, and government should just catalyze or facilitate and not obstruct the growth. But nevertheless, I would like to say that we on the Maharashtra government side have been trying to really catalyze this ecosystem. Maharashtra has nearly 35,000 startups currently registered with DPIIT, and it is the leading state. And some of the things we have been doing are aimed at getting this culture to penetrate across the state.

Initially, we saw that startups were primarily centered around Mumbai and Pune because of the ecosystem there. But today, I'm happy to share that every district in Maharashtra, including Gadchiroli, has a minimum of 25 startups registered. Can you imagine that? We've tried to do it in multiple ways: hackathons, grant challenges, startup yatras, involving college students and the rural areas as much as possible, and creating district-level committees led by the collector but with an entire ecosystem of stakeholders, including principals, ITIs, the district industries officers, and the MSME clusters. Then we also give some financial support, because not all startups are capable of prototyping and then getting quality testing done. So we've done that.

We do some reimbursement for IPR, for domestic or international patents. We help them obtain quality testing and certification. But a very unique experiment we have done, and one we can take genuine credit for, is our program called Startup Week. We invite startups from across the country and get close to 3,000 entries every year. They are shortlisted by an independent jury of domain experts and VCs, and then they pitch before a second round of independent jury. Now, these are not just any startups: we are looking at startups whose technologies and innovations have a large social impact. Just to give you an example, the sectors are clean energy, mobility, agriculture, health, and education.

And fintech; these are the kinds of sectors. As awards, we give the winners direct work orders of up to 25 lakhs; recently these have ranged from 15 to 25 lakhs. Otherwise, startups are stuck with the procurement policies of the government; they are not able to compete in the tender systems that exist. So we give them direct work orders as the winning prize, and then we connect them with the domain departments to roll out their innovations. That has been very helpful for our startups in gaining visibility and even international markets and investors. Some of our startups have really grown, like Sagar Defence.

Their technology has been upgraded for marine surveillance; the Indian Navy has placed orders, and they've created a manufacturing plant near Nashik. We also have NeoDocs, a recent winner from IIT, which has created a very beautiful home-diagnostics app: on a phone you can check more than 30 health parameters at a very low cost. There is another which has addressed menstrual hygiene management and the disposal of sanitary pads in a sustainable way; we did their pilots in Mantralaya itself to see the proof of concept and give them the work order. And there is one more, I think it is called NeoMotion.

I remember it as very interesting: for physically challenged people, their wheelchair converts into a battery-operated two-wheeler for the disabled person. I can cite a lot of examples, even in the areas of agriculture and clean energy. So these are some of the efforts we have been making, and hopefully we will take them to the next level with the help of experts like you.

Romal Shetty

I think it's fantastic work. And on a lighter note, Manisha ji, we also struggle on the tender side. So Aditya, from your experience, where do digital or AI-led interventions for the informal workforce break down, and what are some of the learnings from the past? As you said, you bucketed people into four categories as well.

Aditya Natraj

So we've done a lot of digitization work; in fact, we've showcased it at the expo. We work with the government to digitize government health systems, education systems, agri, water; in any space, digitization normally adds value. But when we are talking about the informal labor force, I think we have to look at the mental model here. When we talk about white-collar workers, like Deloitte or a law firm, they got aggregated more than 40, 50 years ago. If you went back 100 years, you had an individual chartered accountant, an individual lawyer, an individual banker, an individual consultant. Now you have firms. And as soon as you've aggregated, you get lots of benefits: you get specialization, and then you can reintegrate to offer a more complex service.

Or you get more skill and capability growth for each person. You can set quality standards; the customer knows what he's buying. So in the white-collar workforce, this has already happened. In the blue-collar workforce, on the other hand, tell me where you will go for a quality electrician. Right? You'll end up asking your neighbors. What about a carpenter, a tailor? We've not yet organized the blue-collar workforce in a way in which the customer can choose quality predictably. As an urban consumer, I face more than 80 brands a day; even my salt is branded, it's Catch. You walk into a village today, nothing is branded. So the need to aggregate is very critical to improving quality of service, and this is what we tried with our farmer producer organizations. But if you look, there are multiple models for this aggregation. You can have the Fabindia-type model, the private-sector model: Hidesign helped the entire supply chain in leather, Fabindia helped the entire textile supply chain. The second model is the Amul and SEWA model, in which the firm itself is owned by the farmers.

Today, when I buy Amul milk, 90% of what I pay goes back to the farmer; when you buy Nestle milk, it doesn't. When you buy from SEWA, when you buy Lijjat Papad, 90% goes back to the last person, because it's organized as a cooperative. And the third is the Urban Clap model, which says: I will certify the person, he's got a 4.5 rating, so you choose him; you choose this physiotherapist, this carpenter, this plumber. All of these are aggregating in different ways and distributing incentives in different ways. I think we have to think in terms of aggregation. For the artisan who is 45 years old and doing traditional Kalamkari, you're expecting that someone is going to come and choose his particular piece without it having been branded as a whole.

Actually, I think his productivity is quite high. The problem is that his realizations, what he is able to realize from the market, are not as great as the actual craft, and his understanding of where the design market is going in Paris or New York or Delhi is not high enough to adapt his designs. So the constraint, I think, is the aggregation of these workers. For this, the government's main program, NRLM, the National Rural Livelihoods Mission, and the SRLMs, which are of course very powerful in Maharashtra and Bihar, are extremely critical: aggregating workers at various levels so that you can then improve quality, deploy technology, create incentives, and create a common expectation of quality.

Because otherwise, as a consumer, I’m not going to be willing to pay unless I’m sure of a certain quality level.

Romal Shetty

So I have a last question for each of you, for which I request maybe just a minute or two, a quick one. Arundhati, as part of the study, if you remember, we met about 70 personas; we had 70 stories and 70 different aspirations, but they represent a 490-million-strong workforce, 90% of the country's workforce. Those are the numbers, but I believe the stories actually matter more. As a reflection, could you share the persona that stuck with you the most during our exercise?

Arundhati Bhattacharya

You have the mountains, you have the seas, you have culture, temples, old structures; whatever you ask for, it is there. And yet this is one sector where we really haven't done well, and it's very difficult to understand why. Countries with far, far less are doing much, much better. This is also a very labor-intensive sector. We talk about people not having enough jobs, and why not look here? This is a sector that can provide a lot of jobs. There are so many wonders in this country which we ourselves as Indians have not witnessed. And this, I think, is something the government needs to take up on a really urgent footing, because not everything is going to happen from the private side.

But, of course, the private sector coming in here in full force, along with the government, should mean a great deal to us, because this is something that will give us foreign exchange, it will give us a great deal of employment, and, more than anything else, I think it will showcase what India is all about, which is very important. So if you ask me, that was one area where I thought we could do a separate study, to see whether we could do something more for that particular segment.

Romal Shetty

And I can tell you she was just as passionate then. I remember this discussion specifically, but it is a fact that hospitality and tourism are force multipliers, because they also impact so many other industries, right?

Romal Shetty

So Manisha, when it comes to employment, an important ally is industry partnership. What special efforts are there to deepen collaboration between industry and the government for societal impact? A quick question.

Manisha Verma

Okay, before I go to industry, I just quickly wanted to respond, because I remember, a few years ago, when I was tribal department secretary, we used to have a small untied fund called the Nucleus Budget Fund, with which we could make some locally contextualized responses. I remember one of my department officers saying, ma'am, I want to build homestays in tribal areas. Beyond Nashik, in the Bhandardara falls area, there is a cluster of tribal villages which get these fireflies before the monsoon sets in. It's a beautiful sight; I would ask some of you to explore it if you haven't. So I funded, at that time, a few homestays, which was just one lakh rupees per village.


That covered iron furniture, one bed and a mattress and such; the villagers couldn't even afford that, because they were all small, marginal farmers. And then I forgot about it. I had done it out of the way, because there was no such scheme, but I designed it for them because I trusted my officer to use it well. Then he said, ma'am, you come; they are doing good business. A few years after I had left the department, I traveled to the Bhandardara area to catch these fireflies, and he said, ma'am, they are reminding you to come to their house and eat. So from 11 at night till 2 in the morning I was looking at that tract of fireflies, and then I visited that village hamlet.

The lady of the house cooked jowar bhakri and everything, and she was so happy to share with me: ma'am, this is the room; we give our guests authentic Maharashtrian food, and a lot of people come and stay in my room. So that is one example; I just had some warm remembrance. And I'm sure there are so many efforts happening, but as ma'am was saying, we have so much to do in terms of aggregation, a systematic approach to tapping the potential of tourism as well as the rich culture and diversity that we have. Coming quickly to industry: we have given industry a major role, because we keep talking about industry-aligned courses and matchmaking between job seekers and job providers, but it is our industries which are the job providers, whether small-scale industries and MSMEs, big industry associations, or the service sector.

So what we have done is modernize the curriculum. One of the initiatives we have started is a PPP, public-private partnership, policy, in which, if an industry-led anchor partner is there, we will hand over the management of an ITI to the industry for 10 or 20 years, giving them the freedom to design the curriculum, bring in expert faculty, and even converge our resources. This is something Maharashtra did first; recently, the Government of India has also announced the PM Setu scheme, which is akin to this concept of developing ITIs in partnership with industry. On a regular basis, we are also trying to tap industry expertise for OJT, on-the-job training, apprenticeship programs, and advising our academic institutions.

Another good example I would like to share, because it's a recent one: we have introduced short-term training courses and opened the ITIs to non-ITI students in the evenings, for optimal utilization. So in the evenings we can run short-term skilling programs, and we are looking for partnerships there. One good partnership we have done is with Mahindra Tractors in Gadchiroli, again for tribal students: we've completed the first batch of certification in Mahindra Tractors technology, with 100% placement in Gadchiroli. So some...

Romal Shetty

Thank you, Manisha.

Manisha Verma

But one line: this is not enough. We really need industry to engage very deeply. There are structural issues, but we are really open to partnerships; I think industry needs to come forward.

Romal Shetty

Aditya, the final question to you. Of course, the Piramal Foundation has developed really deep experience in community-led development, last-mile governance, and of course behavioral change. In your view, what behavioral-change levers are the most critical to unlocking adoption and trust among informal workers?

Aditya Natraj

You're asking a question we spend all our time on, and I'm going to try to summarize it in two minutes. Let me give you an example of a very basic technology. The Government of India has a huge national digitization program for healthcare workers. There are over a million ASHA workers in India, who are the last-mile delivery channel for all health services. In many states, an ASHA worker still has manual registers in which she fills in the pregnancies; she has 54 different things to track, with a separate register for pregnancies, a separate register for TB, for nutrition, for adolescents. In most states, this was not yet digitized. Now, you would imagine, come on, that's the easiest thing to automate, right?

Because it's a tool: she goes to each home, there's geo-tagging, you have the database, and then you fill in the latest problem so that her surveys are more efficient. Bihar alone, and we went into Bihar to try to digitize this, has over 100,000 ASHA workers. And we thought, hey, this will be done in three months, because we had the technology. The point is that technology adoption is a separate skill from the technology itself. And when we think of technology adoption, again, we're thinking of the white-collar person in this room. When we looked at the people who had to adopt this, we saw that they fell into four categories.

Category one is people who are over 50 and have never used any technology; she has not even used a dumb phone. Now, suddenly, you're asking her to collect her wage on a smartphone. She says, "Bitiya ko de do, wo kar degi" (give it to my daughter, she will do it). So we have to remember that there are people aged 50 to 75 in the government workforce as ASHA workers, people who don't even have a dumb phone; that's about a quarter. The second quartile is people who still have a dumb phone and not a smartphone, and they use it only for calls, not even SMS: for calls and for emergencies, not for work. How will she use it for work? When I press here, what happens? Where does the data go? How does it come back? Who is looking at it? These are the questions going through their minds, because of which there is a huge fear of technology adoption. Then there's a third quartile which has smartphones but is not used to using them for business, meaning whatever work they are doing: my son watches YouTube, I have Prime Video, those sorts of things, but not using it for my own work. And then the top category is typically younger ASHA workers, from 25 to 35, who have a smartphone, who are going out, who are also selling something on the side, running some side business; they are really smart.

So adoption depends on the profile of the workers and how far they have already adopted technology. Yet typically we design one-size-fits-all programs, where one group of people already knew how to do it and another group is never going to do it. I think it is very critical to recognize that there is not one India; there are at least four Indias on any dimension. If we first understand that and then tailor our programs accordingly, I think adoption can happen.

Romal Shetty

Yeah, thank you. I think you can clearly see the wealth of experience, the depth of knowledge, and the willingness to work, from industry, from the development sector, and from the government. Sometimes we feel a bit disheartened, but whenever we hear stories like these and see leaders like this, we know that India is in good hands. So thank you, everyone, for such a wonderful panel, and thank you for your time. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (27)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“India has 490 million informal workers, roughly 90 % of the country’s workforce.”

The knowledge base lists the same figure of 490 million informal workers affected by the identified challenges, confirming the scale mentioned in the report [S4].

Confirmedhigh

“The five systemic roadblocks identified in the NITI‑Aayog study are: lack of discovery and trust, irregular demand, delayed or unfair payment, insufficient upskilling, and limited access to social protection.”

The source enumerates these exact five challenges, matching the report’s description [S4].

Additional Context (medium)

“Arundhati Bhattacharya is chairperson and CEO of Salesforce India and a Padma Shri award‑winner.”

The knowledge base mentions Arundhati Bhattacharya as a speaker advocating cloud-native AI solutions, providing background on her expertise, but does not confirm her CEO role or Padma Shri award [S81].

Additional Context (medium)

“Aditya Natraj is CEO of the Pyramid Foundation and a veteran education‑reform leader.”

The source references Aditya Natraj’s perspective on poverty and education reform, adding context to his involvement, but does not verify his CEO position at the Pyramid Foundation [S1].

External Sources (84)
S1
Building Inclusive Societies with AI — -Manisha Verma: Additional Chief Secretary, SEEID (Skills, Employment, Entrepreneurship, and Innovation Department), Mah…
S2
https://app.faicon.ai/ai-impact-summit-2026/building-inclusive-societies-with-ai — He’s also the recipient of Time’s Now Amazing Indian Award in Education. Thank you, Aditya, for joining us. On the gover…
S3
https://dig.watch/event/india-ai-impact-summit-2026/building-inclusive-societies-with-ai — She is a strong advocate of responsible AI, inclusive technological adoption, and public -private collaboration for nati…
S4
Building Inclusive Societies with AI — show a video which will give you context of what the informal sector is, what are some of the interventions that can be …
S5
Building Inclusive Societies with AI — -S. Anjani Kumar: Role/title not explicitly mentioned in the transcript, appears to be moderating or introducing the pan…
S6
Building Inclusive Societies with AI — Agreed with:S. Anjani Kumar, Arundhati Bhattacharya — Industry-government collaboration is crucial for addressing inform…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S8
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S9
Building Inclusive Societies with AI — -Romal Shetty: CEO of Deloitte South Asia, moderating the panel discussion This panel discussion, moderated by Romal Sh…
S10
Building Inclusive Societies with AI — This panel discussion, moderated by Romal Shetty (CEO of Deloitte South Asia), examined challenges facing India’s inform…
S12
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — Multi-stakeholder collaboration is essential for success Multi-stakeholder partnerships for success The success of Ind…
S13
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S14
Joining forces against disinformation: humanitarian, peace and media actors’ perspectives — The strong consensus on the need for multi-level, coordinated approaches provides a foundation for future collaboration….
S15
https://app.faicon.ai/ai-impact-summit-2026/building-the-workforce_-ai-for-viksit-bharat-2047 — before using the authentic insights to taking decisions. In the past year, the Commission has developed holistic policy …
S16
Assessment report on international cooperation on cybercrime in the Eastern Partnership region — After receiving request for mutual legal assistance at pre-trial stage, the competent central authority (Prosecutor Gene…
S17
CONSULTATIVE DRAFT — 1. This report is about the working of a core group of regional organisations in the Pacific and their collective capaci…
S18
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S19
Sangeet Paul Choudary — During its initial launch, UberEats offered workers £20 per hour. As consumer demand grew and the platform gathered mome…
S20
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — Furthermore, the rise of online learning has also raised important questions about its potential to address the needs of…
S21
Host Country Open Stage — Context-specific solutions are essential rather than one-size-fits-all approaches
S22
UNSC meeting: Conflict prevention: women and youth — The speaker emphasises the critical role of conflict prevention in the United Nations’ mandate, particularly for the Sec…
S23
Open Forum: Empowering Bytes / DAVOS 2025 — This comment highlights the complexity of data issues and the need for nuanced approaches rather than one-size-fits-all …
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Bhattacharya identifies specific challenges faced by blue-collar workers including limited access to job opportunities, …
S25
Multistakeholder Partnerships for Thriving AI Ecosystems — Bhattacharya asserts that countries with large populations like India fundamentally require technology integration to ac…
S26
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S27
Global challenges for the governance of the digital world — The session was structured into two rounds of questions. The first round focused on the obstacles and opportunities for …
S28
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Perhaps the most significant intervention in the discussion came from Kapoor’s observation that the entire conversation …
S29
Science AI & Innovation_ India–Japan Collaboration Showcase — I come from a consulting background. The first thing I did was just divide it by the population, right? Look at a per ca…
S30
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S31
Responsible AI in India Leadership Ethics &amp; Global Impact — “One size doesn’t fit all”[111]. “See, it is a very diverse element and there is a different kind of templates which we …
S32
Exploring Digital Transformation for Economic Empowerment in Africa: Opportunities, Challenges, and Policy Priorities (International Trade and Research Centre, Nigeria) — Furthermore, issues such as cybercrime and digital breaches pose additional threats that need to be addressed. In order …
S33
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Need for inclusive approaches targeting vulnerable communities Need to be youth-centric, focus on vulnerable communitie…
S34
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — The analysis also emphasizes the significance of including vulnerable populations in policy considerations. Often, vulne…
S35
Building Inclusive Societies with AI — Summary:Both speakers strongly agree that uniform solutions fail to address the diverse challenges faced by different ca…
S36
Young voices from Africa – Harnessing digital tools for sustainable trade — Additionally, the scarcity of data on the informal sector makes it challenging to design appropriate policies that can e…
S37
Open Forum #23 Protecting Refugees Digital Resilience Info Integrity — All three speakers emphasize that one-size-fits-all solutions don’t work and that interventions must be tailored to loca…
S38
WS #133 Platform Governance and Duty of Care — Moderate disagreement with significant implications. While speakers agree on the need for platform accountability, their…
S39
Building Inclusive Societies with AI — “At the end of the reports, who is charged with the execution?”[37]. “I think it’s time for all of us to understand that…
S40
WS #395 Applying International Law Principles in the Digital Space — Corporate Accountability and Platform Responsibility Corporate accountability requires moving beyond voluntary commitme…
S41
[Parliamentary session 2] Striking the balance: Upholding freedom of expression in the fight against cybercrime — Platform Responsibility and Accountability Primary responsibility for content moderation and platform accountability
S42
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Adapting global best practices to local contexts while maintaining international cooperation and knowledge sharing Addr…
S43
1. Introduction — – 1) Make digital literacy an integral part of basic literacy education process . As today’s children are exposed to dig…
S44
Digital Education Strategy and Implementation Plan — The situation analysis and several studies indicate a huge gap between students’ foundation ICT skills and those desired…
S45
Strengthening the Measurement of ICT for Sustainable Development: 20 Years of Progress and New Frontiers — Michael Frosch: Well, the work has started, I would say. I didn’t bring a presentation because I realized I will never ma…
S46
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Cooperative models allow users to collectively negotiate with technology companies rather than being powerless as indivi…
S47
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — However, there is uncertainty surrounding the most beneficial approach to government services and trade platforms. Curre…
S48
African Union (AU) Data Policy Framework — With the increasing complexity and adaptiveness of the global communications system, both newer and more traditional for…
S49
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S50
Informal multistakeholder consultations — Business sector needs to be realistic to survive. Highlighting this, private sector engagement in cybersecurity is stro…
S51
WS #152 a Competition Rights Approach to Digital Markets — Moderate to high disagreement on methods and approaches, but strong consensus on the fundamental problem of platform dom…
S52
Harnessing digital public goods and fostering digital cooperation: a multi-disciplinary contribution to WSIS+20 review — Onica advocates for supporting community networks and exploring different financial models for connectivity, similar to …
S53
NEW FORMS OF WORK IN THE DIGITAL ECONOMY — Competition among independent workers is not new, but competition in platform markets can reinforce existing…
S54
Building Inclusive Societies with AI — India’s 490 million informal workforce faces five systemic roadblocks: discovery/trust, steady demand, fair payment, ups…
S55
Building Inclusive Societies with AI — Romal Shetty identifies five key challenges that the informal workforce faces based on their study. These roadblocks pre…
S56
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — References a NITI report studying blue-collar workers including carpenters, plumbers, hospitality workers, and Anganwad…
S57
Redrawing the Geography of Jobs / Davos 2025 — Audience: Hello, can you hear me? I’m Suin Lee, I’m one of the shop social entrepreneur working in education sector. …
S58
Science AI & Innovation_ India–Japan Collaboration Showcase — I come from a consulting background. The first thing I did was just divide it by the population, right? Look at a per ca…
S59
Scaling Innovation Building a Robust AI Startup Ecosystem — The ceremony demonstrated the diversity and sophistication of India’s startup landscape, with recognized companies spann…
S60
AI as critical infrastructure for continuity in public services — Human factors such as fear of replacement and communication style are major barriers to AI adoption. Simple, clear messa…
S61
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — The discussion revealed that technical capabilities often exceed institutional readiness for AI adoption. Behavioral cha…
S62
Panel Discussion AI in Healthcare India AI Impact Summit — Maybe I’ll do the risk first, and then I’ll talk about a few use cases. And by the way, thank you for the comments that …
S63
Open Forum #61 WSIS to WSIS+20: Enduring Principle of Internet Governance — The tone of the discussion was generally positive and collaborative, with panelists emphasizing the successes of the mul…
S64
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S65
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S66
Session — The discussion maintains a consistently academic and diplomatic tone throughout. Both participants approach the topic wi…
S67
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S68
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S69
Evolving Threat of Poor Governance / DAVOS 2025 — The tone was largely serious and analytical, with panelists offering thoughtful insights on complex governance challenge…
S70
Defending Our Voice: Global South Participation in Digital Governance — This comment created a notable shift in the room’s energy and forced panelists to move beyond critique toward actionable…
S71
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S72
From Technical Safety to Societal Impact Rethinking AI Governanc — The discussion began with a formal, academic tone but became increasingly critical and urgent throughout. Speakers expre…
S73
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S74
Day 0 Event #154 Last Mile Internet: Brazil’s G20 Path for Remote Communities — The tone of the discussion was largely optimistic and solution-oriented, with speakers sharing examples of successful le…
S75
AI for Social Good Using Technology to Create Real-World Impact — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for AI…
S76
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S77
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S78
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Overall Tone: The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing …
S79
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S80
Keynote-Nikesh Arora — Overall Tone: The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S81
Building the Next Wave of AI_ Responsible Frameworks & Standards — – Arundhati Bhattacharya- Ankush Sabharwal Bhattacharya advocates for cloud-native solutions with trust layers to ensur…
S82
5. — – The gig economy has received enormous public attention over the past year. Is this attention warranted? Will crowdwo…
S83
Greener economies through digitalisation — In terms of data regulations, Stewart firmly opposes data localisation laws. She argues that these laws can be easily ov…
S84
West vs East: Approaches to fighting corruption — 4. In India, the Supreme Court will directly hear ‘public interest litigation’ and often decide weighty matters on the s…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S. Anjani Kumar
3 arguments · 139 words per minute · 381 words · 163 seconds
Argument 1
Multi‑sector collaboration (industry, development, government) is prerequisite for platform success
EXPLANATION
Anjani Kumar stresses that solving informal sector challenges requires coordinated effort from industry, development agencies, and government. He frames the ecosystem as essential for any digital platform to be effective.
EVIDENCE
He states that “all of the ecosystem has to come together to solve for this problem” and introduces a panel representing industry, development, and government, highlighting the need for cross-sector partnership [4]. He also mentions showing a video to give context about the informal sector [1].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple sources stress that multi-stakeholder partnerships involving industry, government and development agencies are essential for successful digital platforms and ecosystem interventions [S1][S12][S13][S4].
MAJOR DISCUSSION POINT
Cross‑sector partnership for platform implementation
Argument 2
Effective collaboration across sectors is essential for translating recommendations into action
EXPLANATION
The speaker argues that without coordinated action among stakeholders, policy recommendations remain unimplemented. Collaboration is presented as the bridge between reports and real‑world impact.
EVIDENCE
He notes that the panel brings together representatives from industry, development, and government, underscoring the need for joint effort to move from recommendations to execution [3-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration across governments, industry and other stakeholders is highlighted as the key mechanism for turning policy recommendations into concrete outcomes [S13][S1].
MAJOR DISCUSSION POINT
Collaboration as implementation catalyst
AGREED WITH
Arundhati Bhattacharya, Romal Shetty
Argument 3
Ecosystem collaboration is the foundation for overcoming behavioral resistance
EXPLANATION
Anjani Kumar links multi‑sector collaboration to the ability to address behavioral barriers among informal workers. A united ecosystem can design and pilot interventions that gain trust.
EVIDENCE
He reiterates that “all of the ecosystem has to come together” to solve the problem, implying that such collaboration is needed to tackle resistance to change [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The literature notes that a coordinated ecosystem is required to address behavioral barriers and foster adoption of digital solutions [S12][S13].
MAJOR DISCUSSION POINT
Collaboration to address behavioral change
Romal Shetty
5 arguments · 152 words per minute · 921 words · 361 seconds
Argument 1
Reports require an execution authority; a platform provides traceability and implementation oversight
EXPLANATION
Shetty points out that reports generate valuable recommendations, but without a designated executor they stall; a digital platform can create the needed traceability and accountability.
EVIDENCE
He remarks that “different sets of people came together to contribute” and implies the need for a mechanism to turn suggestions into action [113-115]; later he references the committee-recommended platform that could help “Uberize” and create demand, suggesting a role for execution oversight [137-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for an accountable execution body and traceability mechanisms is explicitly discussed, emphasizing that without such authority recommendations stall [S1][S13].
MAJOR DISCUSSION POINT
Need for accountable execution of recommendations
AGREED WITH
Arundhati Bhattacharya
DISAGREED WITH
Arundhati Bhattacharya, Manisha Verma
Argument 2
Prioritising upskilling is vital to close productivity gaps identified in the study
EXPLANATION
Shetty emphasizes that productivity gaps stem from inefficient workflows, and that upskilling informal workers is essential to bridge these gaps within the next 12‑18 months.
EVIDENCE
He asks the panel how to “increase productivity 10x” and queries the guardrails needed, highlighting upskilling as a priority to address workflow inefficiencies [79-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Upskilling is identified as a critical lever for participants in the gig economy and for improving productivity in emerging work contexts [S20].
MAJOR DISCUSSION POINT
Upskilling as a productivity lever
AGREED WITH
Arundhati Bhattacharya, Manisha Verma, Aditya Natraj
Argument 3
Guardrails are needed to ensure technology augments rather than replaces informal workers
EXPLANATION
Shetty calls for safeguards that ensure digital tools support workers, improve safety and earnings, and do not lead to job displacement.
EVIDENCE
In his question to Aditya he asks “what guardrails do you think should be in place so that technology augments workers, improves their safety and earnings, and does not really replace them altogether?” [79-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guardrails for technology adoption are repeatedly called for to protect workers, improve safety and earnings, and prevent displacement [S1][S4][S18].
MAJOR DISCUSSION POINT
Protective safeguards for tech adoption
Argument 4
Platform concept “Uberize” aims to generate demand, build skills, and enable gig‑type opportunities for informal workers
EXPLANATION
Shetty describes a proposed platform that would create demand for informal services, provide skill‑building pathways, and function similarly to gig‑economy models like Uber.
EVIDENCE
He states that “the platform that the committee recommended … to also help to Uberize, to create demand, to also build skills also” [137-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples from UberEats illustrate both the potential and challenges of gig-platform models, while broader gig-economy analyses provide context for demand generation and skill building [S19][S20].
MAJOR DISCUSSION POINT
Gig‑style platform for informal sector
AGREED WITH
Arundhati Bhattacharya, Aditya Natraj
Argument 5
One‑size‑fits‑all solutions are ineffective; nuanced approaches are required to foster adoption
EXPLANATION
Shetty argues that a single, uniform solution cannot address the diverse needs of informal workers; interventions must be tailored to specific contexts and personas.
EVIDENCE
He notes that “different sets of people came together to contribute” indicating the need for varied solutions [113-115]; he also references the broader discussion that “one size fits all solutions are ineffective” in the context of digital adoption [319-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Several sources argue that context-specific, nuanced solutions are essential rather than one-size-fits-all approaches for digital adoption [S21][S23].
MAJOR DISCUSSION POINT
Need for tailored interventions
AGREED WITH
Arundhati Bhattacharya, Aditya Natraj
Arundhati Bhattacharya
5 arguments · 169 words per minute · 1281 words · 452 seconds
Argument 1
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
EXPLANATION
Arundhati argues that a digital marketplace is crucial for informal workers to be discovered, showcase their credentials, access job opportunities, and ensure timely, transparent payments.
EVIDENCE
She illustrates a plumber who lacks knowledge of nearby opportunities and explains that a marketplace would let workers list credentials and access jobs, while also noting that payment delays persist because there is “no footprint” without a digital platform [34-38][42-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-powered marketplaces are presented as solutions to job discovery, credential verification and payment delays for blue-collar workers [S24][S25].
MAJOR DISCUSSION POINT
Marketplace as a digital enabler for informal workers
AGREED WITH
Romal Shetty, Aditya Natraj
DISAGREED WITH
Aditya Natraj
Argument 2
Upskilling must be verifiable through digital certification to match evolving technology needs
EXPLANATION
She stresses that upskilling initiatives need a verifiable digital certification system so that workers can prove they have acquired new skills aligned with rapid technological change.
EVIDENCE
She mentions the need for “verifiable certification” after upskilling and ties it to a digital platform that can record such credentials [37-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of verifiable skill certification is identified as a barrier, underscoring the need for digital credentials linked to upskilling initiatives [S24].
MAJOR DISCUSSION POINT
Verified digital credentials for upskilling
AGREED WITH
Romal Shetty, Manisha Verma, Aditya Natraj
DISAGREED WITH
Aditya Natraj
Argument 3
Private sector innovation, when aligned with government support, can scale solutions for informal workers
EXPLANATION
Arundhati highlights that private‑sector initiatives can drive impactful solutions, but scaling them requires coordinated government partnership.
EVIDENCE
She observes that “private sector coming in full force, along with the government” can make a great difference, emphasizing the need for public-private collaboration [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry-government collaboration is repeatedly highlighted as critical for scaling innovative solutions for the informal sector [S4][S12][S13].
MAJOR DISCUSSION POINT
Public‑private synergy for scaling innovation
AGREED WITH
Manisha Verma, Aditya Natraj
Argument 4
Digital footprints create accountability, encouraging trust among informal workers and payers
EXPLANATION
She points out that digital platforms leave an audit trail of transactions, which builds accountability and trust between workers and those who pay them.
EVIDENCE
She notes that without a digital platform “there is no footprint about the delays” and that such a footprint is essential for accountability [42-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital footprints are emphasized as essential for auditability, accountability and building trust between workers and payers [S1][S24].
MAJOR DISCUSSION POINT
Traceability as trust builder
AGREED WITH
Romal Shetty
Argument 5
Absence of a designated authority hampers implementation of report recommendations
EXPLANATION
Arundhati observes that while reports generate valuable suggestions, the lack of an accountable body to execute them leads to poor implementation.
EVIDENCE
She questions “who is charged with the execution?” and notes that there is “no downside” for non-execution, indicating a gap in accountability [45-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion points out that without a clear execution authority, report recommendations remain unimplemented [S1][S13].
MAJOR DISCUSSION POINT
Need for an execution authority
AGREED WITH
S. Anjani Kumar, Romal Shetty
DISAGREED WITH
Manisha Verma, Romal Shetty
Manisha Verma
5 arguments · 145 words per minute · 1918 words · 792 seconds
Argument 1
State‑run skilling and innovation initiatives leverage simple mobile tools to connect demand with supply
EXPLANATION
Manisha describes how Maharashtra’s SEED department uses simple mobile‑enabled platforms to match skilled workers with industry demand, creating a digital bridge between supply and demand.
EVIDENCE
She outlines the SEED department’s oversight of ITIs, short-term skilling programs, a state board for accreditation, and the Maharashtra State Innovation Society, all of which together form a digital ecosystem that links skill supply to market demand [55-78].
MAJOR DISCUSSION POINT
Digital skilling ecosystem linking supply and demand
AGREED WITH
Arundhati Bhattacharya, Romal Shetty, Aditya Natraj
Argument 2
SEED department oversees ITIs, short‑term programs, accreditation, and inclusive skilling for vulnerable groups
EXPLANATION
Manisha details the comprehensive mandate of the SEED department, which includes managing over a thousand institutes, accrediting private providers, and targeting vulnerable populations such as prisoners, people with disabilities, women, and tribal communities.
EVIDENCE
She mentions the department’s oversight of 1,000+ ITIs, the Maharashtra State Skilling Society’s short-term programs, the state board’s accreditation role, and specific inclusion initiatives for jail inmates, people with disabilities, women, and tribal areas [61-77].
MAJOR DISCUSSION POINT
Inclusive vocational governance
AGREED WITH
Aditya Natraj, Arundhati Bhattacharya
Argument 3
Public‑private partnership policy empowers industry to manage ITI curricula and training for longer terms
EXPLANATION
Manisha explains a PPP policy that allows industry anchor partners to run ITIs for 10‑20 years, giving them freedom to design curricula, provide expert faculty, and align resources with industry needs.
EVIDENCE
She describes the PPP policy that grants industry management of ITIs for a decade or two, with curriculum design freedom and resource convergence, and notes its alignment with the national PM Setu scheme [274-276].
MAJOR DISCUSSION POINT
Industry‑led vocational training through PPP
Argument 4
Maharashtra’s startup initiatives (hackathons, Startup Week, direct work orders) drive social impact across clean energy, health, agriculture, etc.
EXPLANATION
Manisha showcases Maharashtra’s vibrant startup ecosystem, highlighting hackathons, grant challenges, a statewide Startup Week, and direct government work orders that support socially impactful startups in sectors such as clean energy, mobility, agriculture, health, and education.
EVIDENCE
She cites nearly 35,000 registered startups, district-level presence, hackathons, grant challenges, Startup Week attracting ~3,000 entries, financial support for IPR, quality testing, and concrete examples like Sagar Defense, a health diagnostic app, menstrual-hygiene solutions, and a wheelchair-to-two-wheeler conversion [146-176][166-183].
MAJOR DISCUSSION POINT
State‑driven startup ecosystem for social good
Argument 5
Trust‑building examples (funded tribal homestays) illustrate the need for localized, low‑risk pilots
EXPLANATION
Manisha shares a personal anecdote of funding tribal homestays with a modest budget, demonstrating how small, locally tailored pilots can build trust, showcase impact, and encourage community participation.
EVIDENCE
She recounts allocating a one-lakh-rupee fund to build homestays in tribal villages, the subsequent success of the venture, and her own visit to experience the fireflies and hospitality, emphasizing the value of such grassroots initiatives [240-270].
MAJOR DISCUSSION POINT
Grassroots pilots as trust builders
Aditya Natraj
5 arguments · 185 words per minute · 1857 words · 600 seconds
Argument 1
Aggregation models (co‑operatives, marketplace ratings) are key to improve quality and market access for blue‑collar workers
EXPLANATION
Aditya argues that aggregating blue‑collar workers through cooperatives, brand‑like models, or rating platforms can raise service quality and open up market opportunities that are otherwise fragmented.
EVIDENCE
He outlines models such as FabIndia, Amul, SEWA, and Urban Clap, explaining how each aggregates workers to improve quality, distribute incentives, and create recognizable brands for consumers [188-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Marketplace aggregation models such as cooperatives and rating platforms are cited as ways to raise service quality and expand market access for informal workers [S24][S25].
MAJOR DISCUSSION POINT
Worker aggregation for quality and market access
AGREED WITH
Arundhati Bhattacharya, Romal Shetty
DISAGREED WITH
Arundhati Bhattacharya
Argument 2
Bottom‑quartile workers lack basic education; programs must go beyond 10th‑standard prerequisites
EXPLANATION
He highlights that the poorest quartile often have less than six years of schooling, making standard 10th‑grade‑based training unsuitable; programs must be designed for lower literacy levels.
EVIDENCE
He notes that 40% of families in the target states lack anyone with six years of education, and many existing programs assume a 10th-standard baseline, which excludes the bottom quartile [108-110].
MAJOR DISCUSSION POINT
Education gap in the poorest segment
DISAGREED WITH
Arundhati Bhattacharya
Argument 3
National Rural Livelihood Mission (NRLM) and State‑level SRLM are critical aggregation mechanisms for policy rollout
EXPLANATION
Aditya emphasizes that NRLM and its state counterpart SRLM are essential for aggregating informal workers, enabling quality improvement, technology deployment, and incentive structures.
EVIDENCE
He states that “the government’s main program of NRLM, and the SRLM, … is extremely critical for aggregating workers at various levels” [214-215].
MAJOR DISCUSSION POINT
Policy‑driven aggregation for implementation
Argument 4
Workers fall into four adoption profiles; programs must be tailored to each segment’s technology familiarity
EXPLANATION
He categorises informal workers into four groups based on their familiarity with technology—from no phone to smartphone‑savvy youths—arguing that interventions must be customized for each group.
EVIDENCE
He describes the four categories: over-50s with no phone, dumb-phone users, smartphone users who only use it for entertainment, and young smartphone-savvy workers, illustrating the diversity of adoption readiness [309-318].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for context-specific, nuanced interventions for diverse user groups is highlighted in multiple sources [S21][S23].
MAJOR DISCUSSION POINT
Segmented digital adoption strategy
AGREED WITH
Romal Shetty, Arundhati Bhattacharya
Argument 5
Community‑led development experience can inform startup‑driven interventions for trust and adoption
EXPLANATION
Aditya notes his background in community‑led development and suggests that such experience can guide startups in designing trustworthy, locally relevant solutions for informal workers.
EVIDENCE
He references his participation in the NITI Aayog committee and the Piramal Foundation's community-led development work, indicating deep experience in grassroots interventions [81-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaborative, community-led approaches are recommended to guide startups in designing trustworthy, locally relevant solutions [S13][S12].
MAJOR DISCUSSION POINT
Leveraging community development for startup impact
Agreements
Agreement Points
Multi‑sector collaboration and a clear execution authority are essential for turning recommendations into action for the informal sector
Speakers: S. Anjani Kumar, Arundhati Bhattacharya, Romal Shetty
Multi‑sector collaboration (industry, development, government) is prerequisite for platform success
Effective collaboration across sectors is essential for translating recommendations into action
Absence of a designated authority hampers implementation of report recommendations
Reports require an execution authority; a platform provides traceability and implementation oversight
All three speakers stress that solving informal-sector challenges requires a coordinated ecosystem of industry, development agencies and government, and that without a designated authority to execute recommendations, reports remain idle and impact is limited [4][122-124][113-115].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors calls in WSIS-related reports for a designated authority to drive implementation and for sustained multi-stakeholder collaboration [S39][S49][S35][S50].
A digital marketplace/aggregation platform is needed to discover workers, showcase credentials, ensure timely payment and improve market access
Speakers: Arundhati Bhattacharya, Romal Shetty, Aditya Natraj
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
Platform concept “Uberize” aims to generate demand, build skills, and enable gig‑type opportunities for informal workers
Aggregation models (co‑operatives, marketplace ratings) are key to improve quality and market access for blue‑collar workers
The panel agrees that a digital platform that aggregates informal workers and connects them with demand is crucial for visibility, credential verification and payment traceability, thereby improving quality and market reach [34-38][42-44][137-140][188-215].
POLICY CONTEXT (KNOWLEDGE BASE)
Platform-centric solutions are debated alongside cooperative alternatives; discussions on governance, data ownership and integration echo findings on cooperative models and platform design choices [S46][S47][S53].
Upskilling informal workers is vital and must be linked to verifiable digital certification and tailored to workers’ technology familiarity
Speakers: Arundhati Bhattacharya, Romal Shetty, Manisha Verma, Aditya Natraj
Upskilling must be verifiable through digital certification to match evolving technology needs
Prioritising upskilling is vital to close productivity gaps identified in the study
State‑run skilling and innovation initiatives leverage simple mobile tools to connect demand with supply
Workers fall into four adoption profiles; programs must be tailored to each segment’s technology familiarity
All speakers highlight upskilling as a key lever for productivity; it should be certified digitally, supported by mobile-enabled state programmes and adapted to four distinct worker adoption profiles ranging from no phone to smartphone-savvy youths [37-38][79-80][55-58][309-318].
POLICY CONTEXT (KNOWLEDGE BASE)
Best-practice frameworks stress digital certification linked to dual-education and ICT literacy programmes, emphasizing alignment with local skill levels [S42][S43][S44].
Digital footprints and traceability are needed to build trust and hold parties accountable
Speakers: Arundhati Bhattacharya, Romal Shetty
Digital footprints create accountability, encouraging trust among informal workers and payers
Reports require an execution authority; a platform provides traceability and implementation oversight
Both emphasize that without a digital audit trail payments and service quality cannot be monitored, and that a platform providing such traceability is essential for accountability and trust [42-44][113-115].
POLICY CONTEXT (KNOWLEDGE BASE)
Platform accountability and corporate duty of care are highlighted in recent governance debates, with emphasis on traceable data and robust data-governance frameworks [S38][S40][S48][S36].
Interventions must be tailored; one‑size‑fits‑all solutions are ineffective for the diverse informal workforce
Speakers: Romal Shetty, Arundhati Bhattacharya, Aditya Natraj
One‑size‑fits‑all solutions are ineffective; nuanced approaches are required to foster adoption
“So basically again you know there cannot be a cookie cutter solution to all of this because the persuasions are so different the challenges are so different you necessarily need to solve for people in different ways”
Workers fall into four adoption profiles; programs must be tailored to each segment’s technology familiarity
The panel concurs that the informal sector is heterogeneous; solutions need to be customized to different worker segments rather than applying a uniform model [120-122][113-115][319-322][309-318].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple fora have underscored the need for context-specific interventions, rejecting uniform solutions for informal workers [S35][S37][S33].
Inclusive approaches targeting vulnerable groups (women, tribal communities, bottom‑quartile workers) are essential
Speakers: Manisha Verma, Aditya Natraj, Arundhati Bhattacharya
SEED department oversees ITIs, short‑term programs, accreditation, and inclusive skilling for vulnerable groups
“What is AI going to do for this girl … tribal … bottom‑quartile … lower growth rate … need to focus on them”
Private sector innovation, when aligned with government support, can scale solutions for informal workers
All three stress the need to reach women, tribal populations and the poorest quartile through inclusive skilling, tailored interventions and public-private scaling to ensure no group is left behind [76-77][94-99][101-104][120-122].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers repeatedly call for gender-responsive and youth-centric measures that explicitly include women, tribal peoples and other marginalized groups [S32][S33][S34][S35].
Similar Viewpoints
Both argue that cross‑sector collaboration must be paired with a clear execution body to move from reports to concrete impact [4][113-115].
Speakers: S. Anjani Kumar, Romal Shetty
Effective collaboration across sectors is essential for translating recommendations into action
Reports require an execution authority; a platform provides traceability and implementation oversight
Both see aggregation—through a digital marketplace or cooperative models—as central to improving visibility, quality and market access for informal workers [34-38][42-44][188-215].
Speakers: Arundhati Bhattacharya, Aditya Natraj
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
Aggregation models (co‑operatives, marketplace ratings) are key to improve quality and market access for blue‑collar workers
Both highlight that public‑private partnerships enable industry to lead curriculum design and scale innovations for the informal sector [274-276][120-124].
Speakers: Manisha Verma, Arundhati Bhattacharya
Public‑private partnership policy empowers industry to manage ITI curricula and training for longer terms
Private sector innovation, when aligned with government support, can scale solutions for informal workers
Both stress the necessity of segment‑specific interventions rather than uniform solutions, citing diverse technology adoption levels among workers [113-115][319-322][309-318].
Speakers: Romal Shetty, Aditya Natraj
One‑size‑fits‑all solutions are ineffective; nuanced approaches are required to foster adoption
Workers fall into four adoption profiles; programs must be tailored to each segment’s technology familiarity
Unexpected Consensus
Simple low‑tech interventions can generate large impact for informal workers
Speakers: Arundhati Bhattacharya, Manisha Verma
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
Trust‑building examples (funded tribal homestays) illustrate the need for localized, low‑risk pilots
While most discussion focused on high-tech digital platforms, both speakers highlighted very basic interventions (an upgrade of stone-age tools and a one-lakh-rupee homestay fund) that dramatically improved livelihoods, showing consensus that low-tech pilots are valuable [129-132][240-254].
Overall Assessment

The panel shows strong convergence on the need for multi‑sector collaboration, a digital platform/marketplace, focused upskilling with verifiable certification, accountability through digital footprints, and tailored solutions that address vulnerable groups.

High consensus across industry, development and government representatives, indicating a solid foundation for coordinated policy and program design to empower the informal workforce.

Differences
Different Viewpoints
Responsibility for execution and accountability of platform implementation
Speakers: Arundhati Bhattacharya, Manisha Verma, Romal Shetty
Absence of a designated authority hampers implementation of report recommendations
Public‑private partnership policy empowers industry to manage ITI curricula and training for longer terms
Reports require an execution authority; a platform provides traceability and implementation oversight
Arundhati questions who will be charged with executing the study’s recommendations, noting that without an accountable body there are no consequences for non-execution [45-50]. Manisha argues that the government should act mainly as a catalyst and not claim primary ownership, emphasizing that industry-led partnerships (PPP) should drive implementation [143-146]. Romal stresses that a clear execution authority and a digital platform are needed to turn reports into action and to provide traceability for payments and delivery [113-115][137-140]. The three speakers therefore disagree on which entity (government, private sector, or a dedicated authority) should lead and be held accountable for implementing solutions.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on platform governance stress the need for a clear duty-of-care and accountable execution body, as reflected in recent WSIS and platform-responsibility discussions [S38][S39][S41][S40].
Approach to upskilling: digital certification vs addressing basic education gaps
Speakers: Arundhati Bhattacharya, Aditya Natraj
Upskilling must be verifiable through digital certification to match evolving technology needs
Bottom‑quartile workers lack basic education; programs must go beyond 10th‑standard prerequisites
Arundhati argues that upskilling initiatives need a verifiable digital certification system so workers can prove new skills, linking this to a digital platform [37-38]. Aditya points out that a large share of the poorest workers have less than six years of schooling and that many existing programs assume a 10th-standard baseline, making such digital upskilling unsuitable without first addressing foundational education gaps [108-110]. This creates a disagreement on whether digital certification alone is sufficient or whether basic literacy must be tackled first.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between digital credentialing and foundational ICT literacy is documented in upskilling strategies that combine basic education with certification pathways [S42][S43][S44].
Preferred aggregation mechanism: digital marketplace vs cooperative/ rating‑based models
Speakers: Arundhati Bhattacharya, Aditya Natraj
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
Aggregation models (co‑operatives, marketplace ratings) are key to improve quality and market access for blue‑collar workers
Arundhati proposes a digital marketplace where informal workers can list credentials, discover jobs and create a payment footprint, arguing this is essential for transparency and accountability [34-38][42-44]. Aditya describes alternative aggregation approaches (cooperatives, brand-like models such as FabIndia, Amul and SEWA, and rating platforms such as UrbanClap) as ways to improve service quality and market access without necessarily relying on a single marketplace [188-215]. The speakers therefore disagree on the optimal structure for aggregating informal workers.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders contrast market-driven aggregation with cooperative data-ownership models, a theme echoed in analyses of cooperative platforms and integration choices [S46][S47][S53].
Unexpected Differences
Government’s role in the startup ecosystem versus private‑sector driven growth
Speakers: Manisha Verma, Arundhati Bhattacharya
Public‑private partnership policy empowers industry to manage ITI curricula and training for longer terms
Private sector innovation, when aligned with government support, can scale solutions for informal workers
Manisha describes the Maharashtra startup ecosystem as organically grown and asserts that the government should not claim credit, positioning itself mainly as a catalyst [143-146]. Arundhati, however, emphasizes that scaling innovative solutions requires strong public-private collaboration and that the government must play a decisive role in enabling ecosystems [120-124]. This contrast in the perceived ownership of startup success was not anticipated given the shared focus on societal impact.
POLICY CONTEXT (KNOWLEDGE BASE)
The literature highlights the importance of public-private partnerships while noting the private sector’s pragmatic role in driving startup ecosystems [S35][S49][S50].
Platform “Uberize” concept versus cooperative aggregation models
Speakers: Romal Shetty, Aditya Natraj
Platform concept “Uberize” aims to generate demand, build skills, and enable gig‑type opportunities for informal workers
Aggregation models (co‑operatives, marketplace ratings) are key to improve quality and market access for blue‑collar workers
Romal promotes a unified digital platform that would “Uberize” informal work, creating demand and skill pathways [137-140]. Aditya argues that multiple aggregation models (cooperatives, brand-like structures, and rating systems) are essential and that a single platform may not capture the diversity of workers’ needs [188-215]. The divergence between a single gig-platform vision and a pluralistic aggregation approach was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Critiques of “Uberization” stress alternative cooperative structures and concerns over platform dominance, aligning with calls for community-owned digital infrastructures [S46][S51][S52].
Overall Assessment

The panel shows broad consensus that digital interventions are needed for the informal sector, but there are notable disagreements on who should own and be accountable for implementation, the design of upskilling pathways, and the optimal aggregation mechanism. These tensions revolve around the balance of government versus private sector leadership, the need to address foundational education gaps before digital certification, and whether a single marketplace platform or a suite of cooperative models best serves workers.

Moderate to high disagreement on governance and design choices, which could impede coordinated action unless a clear, jointly‑owned execution framework is established. The differing views may lead to fragmented pilots and slower scaling of solutions.

Partial Agreements
All four speakers agree that digital solutions—whether a platform, marketplace, or mobile‑enabled skilling system—are needed to address challenges faced by the informal workforce. They differ on the specific design and governance of those solutions, but share the common goal of leveraging ICTs to improve discovery, upskilling, and accountability [113-115][137-140][34-38][55-78][309-318].
Speakers: Romal Shetty, Arundhati Bhattacharya, Manisha Verma, Aditya Natraj
Reports require an execution authority; a platform provides traceability and implementation oversight
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
State‑run skilling and innovation initiatives leverage simple mobile tools to connect demand with supply
Workers fall into four adoption profiles; programs must be tailored to each segment’s technology familiarity
All three acknowledge that collaboration between public and private actors is essential, yet they differ on the balance of roles—Arundhati stresses a need for an accountable authority, Manisha emphasizes industry‑led management through PPP, and Romal calls for a platform that can provide oversight and traceability [34-38][274-276][113-115].
Speakers: Arundhati Bhattacharya, Manisha Verma, Romal Shetty
Digital marketplace essential for discovery, credentials, opportunities, and payment accountability
Public‑private partnership policy empowers industry to manage ITI curricula and training for longer terms
Reports require an execution authority; a platform provides traceability and implementation oversight
Takeaways
Key takeaways
A digital marketplace/platform is essential to discover informal workers, showcase credentials, generate demand, and ensure timely payment with traceable accountability.
Execution authority is missing; without a designated body, reports and recommendations remain unimplemented.
Aggregation models (cooperatives, rating‑based marketplaces, sector‑specific platforms) are critical to improve quality, market access, and earnings for blue‑collar workers.
Skilling and vocational education must be linked to digital certification and continuously updated to match rapid technology change; inclusive programs are needed for vulnerable groups and for the bottom quartile lacking basic education.
Public‑private partnership (PPP) policies that give industry long‑term control of ITI curricula and training can align skill supply with industry needs.
Maharashtra’s startup ecosystem (hackathons, Startup Week, direct work orders, sector‑focused innovations) demonstrates how private‑sector solutions can be scaled for social impact.
Behavioral adoption varies across four distinct user profiles; one‑size‑fits‑all digital interventions fail, requiring tailored outreach, trust‑building pilots, and localized low‑risk experiments.
Multi‑sector collaboration (government, industry, development NGOs) is a prerequisite for any platform or policy to succeed.
Resolutions and action items
Proposal to create a dedicated execution authority (or designate an existing agency) to own, implement and monitor the digital platform for informal workers.
Development of a national‑level digital marketplace that records worker credentials, upskilling certifications, job demand, and payment timelines.
Leverage the NRLM/SRLM mechanisms to aggregate informal workers and feed them into the digital platform.
Expand Maharashtra’s PPP policy for ITIs, allowing industry anchor partners to design curricula and provide expert faculty for 10‑20 years.
Scale short‑term, evening skilling programs and certify them digitally, especially for vulnerable groups (women, tribal, disabled, ex‑inmates).
Continue and expand Maharashtra’s Startup Week and direct work‑order scheme to channel government procurement to high‑impact social startups.
Design tiered digital adoption programmes aligned with the four identified user categories (age/technology familiarity).
Pilot localized interventions such as tribal homestays funded through untied “Nucleus Budget” style grants to build trust and demonstrate impact.
Unresolved issues
Who exactly will be appointed as the execution authority and how will its accountability be enforced?
Funding model for the proposed digital platform and associated upskilling certification infrastructure.
Mechanisms to ensure technology augments informal workers rather than displaces them, especially in AI‑driven interventions.
Scalable approach to address payment delays by large corporates and government agencies beyond the platform’s traceability.
Concrete strategy to bring the bottom quartile (lacking basic education) into the formal market and provide appropriate entry‑level skilling.
How to harmonize multiple aggregation models (cooperatives, marketplace ratings, private‑sector platforms) without fragmenting the ecosystem.
Extent of industry commitment required for deep partnership; specific incentives or obligations were not defined.
Suggested compromises
Government acts as a catalyst and facilitator (providing policy, funding, and oversight) while industry takes operational lead on curriculum design and platform management (PPP model).
Combine simple, low‑tech interventions (e.g., upgraded tools for tribal artisans) with digital solutions to address immediate productivity gaps without over‑reliance on high‑tech AI.
Use both cooperative‑based aggregation (e.g., NRLM) and marketplace‑rating models to cater to different worker segments, balancing collective bargaining power with individual choice.
Implement a phased digital adoption plan that matches the four user profiles, allowing early adopters to drive momentum while providing extra support to low‑tech users.
Thought Provoking Comments
India is great at putting out fantastic reports. At the end of the reports, who is charged with the execution? There has to be an authority that will take charge, run with it, and be accountable for actually implementing it.
Challenges the prevailing focus on reports and recommendations by highlighting the lack of execution responsibility, calling for a concrete governance mechanism.
Shifted the conversation from identifying problems to demanding actionable accountability. Prompted other panelists to consider implementation frameworks and set the tone for discussing practical steps rather than just analysis.
Speaker: Arundhati Bhattacharya
When we talk about productivity gaps, we must consider the bottom quartile: women married before 18, tribal communities, and those with no education. What will AI do for a 20‑year‑old girl who already has two children?
Introduces a social‑structural lens to the productivity discussion, moving beyond technology to deep-rooted gender and education issues.
Redirected the dialogue to address systemic inequities, influencing subsequent remarks about targeting interventions for the most vulnerable groups and highlighting the need for inclusive policies.
Speaker: Aditya Natraj
The key to improving blue‑collar services is aggregation – models like FabIndia, Amul, or UrbanClap show how organizing workers can raise quality, trust, and earnings.
Provides concrete, varied models for scaling informal work, illustrating how collective organization can solve quality and market access problems.
Introduced a new thematic strand about worker aggregation, leading Manisha to discuss state‑level aggregation programs (NRLM, SRLM) and prompting consideration of policy levers for organizing informal labor.
Speaker: Aditya Natraj
In a tribal bamboo‑working village, simply replacing stone‑age tools with better equipment dramatically improved product quality and market price—no fancy tech needed.
Demonstrates that low‑tech, context‑specific interventions can have outsized impact, challenging the assumption that high‑tech solutions are always required.
Balanced the high‑tech focus of the discussion, encouraging participants to consider simple, locally‑driven innovations as part of the solution mix.
Speaker: Arundhati Bhattacharya
Technology adoption varies across four ‘Indias’: older workers with no phone, those with dumb phones, smartphone users who don’t use them for business, and young workers comfortable with apps. One‑size‑fits‑all programs fail.
Offers a nuanced behavioral framework for digital adoption, highlighting the need for segmented strategies.
Deepened the analysis of implementation challenges, leading Manisha to reference tailored training programs and prompting the panel to think about differentiated outreach.
Speaker: Aditya Natraj
I funded a one‑lakh‑rupee pilot for homestays in a tribal firefly area, and three years later the community was thriving with tourists, showcasing tourism’s potential for inclusive growth.
Provides a concrete, personal success story that illustrates how small, targeted interventions can unlock economic opportunities in remote communities.
Served as a turning point that highlighted tourism as a viable sector for informal workers, prompting Arundhati to emphasize hospitality’s multiplier effect and expanding the conversation beyond traditional skill sectors.
Speaker: Manisha Verma
Overall Assessment

The discussion began with a broad framing of informal sector challenges, but pivotal comments—particularly Arundhati’s call for execution accountability, Aditya’s focus on the bottom quartile and aggregation models, and the nuanced insights on technology adoption—shifted the dialogue toward concrete, implementable strategies. Personal anecdotes, such as Manisha’s tribal homestay pilot and Arundhati’s bamboo‑tool example, grounded the conversation in real‑world impact, prompting the panel to balance high‑tech aspirations with low‑tech, context‑specific solutions. Collectively, these thought‑provoking remarks redirected the panel from abstract problem‑identification to actionable, inclusive pathways for scaling and sustaining informal sector growth.

Follow-up Questions
Who should be accountable for executing the recommendations from the informal sector report?
Arundhati highlighted the lack of a designated authority to implement report suggestions, indicating a need to define execution responsibility.
Speaker: Arundhati Bhattacharya
What digital platform architecture is needed to comprehensively address worker discovery, upskilling, payment, and accountability for informal workers?
Both pointed to the necessity of a unified digital platform, but specifics of design, governance and integration remain unresolved.
Speaker: Arundhati Bhattacharya, Romal Shetty
Why is the bottom quartile of the informal workforce not plugged into the market, and what targeted interventions can bring them in?
Aditya emphasized the large gap for the poorest segment and called for deeper investigation into barriers and effective outreach strategies.
Speaker: Aditya Natraj
What is the potential of the tourism/hospitality sector for job creation and foreign‑exchange earnings, and how can it be better leveraged?
She noted the sector’s under‑performance despite its labor‑intensive nature and suggested a dedicated study to unlock its opportunities.
Speaker: Arundhati Bhattacharya
Which aggregation models (e.g., cooperative, private‑brand, marketplace) are most effective for improving quality, market access and earnings for blue‑collar workers?
Aditya discussed various aggregation approaches and indicated the need to evaluate their relative impact on informal workers.
Speaker: Aditya Natraj
How can digital adoption programs be tailored to the four distinct user groups (age/technology proficiency) among informal workers?
He identified four user categories with differing tech readiness, implying research is needed to design segmented adoption interventions.
Speaker: Aditya Natraj
What are the outcomes and best practices of public‑private partnership (PPP) policies for ITI curriculum design and industry involvement?
Manisha described PPP initiatives but called for assessment of their effectiveness and scalability.
Speaker: Manisha Verma
What impact does the ‘Startup Week’ initiative and direct government work orders have on scaling socially impactful startups?
She highlighted the program’s success stories and suggested systematic evaluation of its broader impact.
Speaker: Manisha Verma
How can tender procurement processes be reformed to reduce barriers for startups seeking government contracts?
Manisha recounted challenges startups face in government tenders, indicating a need for policy and procedural research.
Speaker: Manisha Verma
What guardrails are needed to ensure technology augments informal workers without displacing them?
Romal asked about safeguards; while discussed, concrete frameworks and metrics remain an open research area.
Speaker: Romal Shetty, Aditya Natraj
How can timely payment mechanisms be enforced across corporates and government to support informal workers?
Arundhati stressed pervasive payment delays and the lack of accountability, pointing to a need for mechanisms that ensure prompt payments.
Speaker: Arundhati Bhattacharya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Agentic AI in Focus: Opportunities, Risks and Governance

Session at a glance: summary, key points, and speakers overview

Summary

The session opened with Jason Oxman outlining a two-part panel that would first examine the business case for agentic AI and then explore the public-policy measures needed to encourage and safeguard its use [1-3]. Austin Mayron, Acting Director of the U.S. Center for AI Standards and Innovation (CAISI), explained that the organization, re-branded from the AI Safety Institute in June 2025, now focuses on standards and innovation rather than safety alone [13-18]. He noted that CAISI sits within the Department of Commerce and co-locates with NIST, positioning it as a front door for industry to engage with government and to develop voluntary standards and best practices [19-30]. CAISI has recently launched an AI-agent standards initiative, issued a request for information on agent security, and announced sector-specific listening sessions for healthcare, education, and finance to gather industry challenges [32-38].


Prith Banerjee of Synopsys described how “agentic engineers”, AI-driven agents that complement human designers, are being used to accelerate chip-to-system design, enabling faster product cycles in automotive and aerospace applications [55-95]. Caroline Louveaux of MasterCard highlighted the shift from assistive AI to operational AI that autonomously detects fraud and initiates payments, and she outlined four guardrails (know your agent, security by design, clear consumer intent, and traceability) to ensure trust and accountability [101-118][218-231]. Syam Nair of NetApp explained that agents embedded near storage controllers improve data quality and allow real-time risk detection, positioning the company at roughly level three of a five-level agent maturity model [128-143].


Austin emphasized a bottom-up, grassroots approach to standards, suggesting CAISI could develop benchmarks for handling personally identifiable information and interoperability to reduce adoption barriers [156-170]. Prith warned that autonomous physical systems such as self-driving cars or software-defined aircraft present safety risks that require extensive verification and validation before deployment [179-207]. Syam added that robust data governance and multi-level guardrails are essential because agents act on data without empathy, and ultimate accountability must remain with human owners [235-248].


Several panelists converged on the OECD as the primary multilateral forum for AI policy, with Danielle Gilliam-Moore noting its influence on the EU AI Act and Sam Kaplan and Carly Ramsey recommending it alongside regional standards bodies and events like Singapore International Cyber Week [383-395][401-406]. Ellie Sakhaee and Combiz Abdolrahimi called for technical benchmarks and broader multistakeholder platforms such as the International Consortium of Safety Institutes and the UN/ITU to ensure inclusive, standards-driven governance, concluding that aligning industry-led standards with policy will enable safe, trusted adoption of agentic AI [410-415][426-434][435-439].


Keypoints

CAISI/NIST are spearheading a standards-driven, industry-focused initiative to enable safe adoption of agentic AI. The Center for AI Standards and Innovation (CAISI) operates within the Department of Commerce and partners with NIST to develop voluntary standards, has issued an RFI on AI-agent security, and is holding sector-specific listening sessions on health care, education, and finance to gather industry challenges and shape best-practice guidelines. [13-18][26-30][32-38][156-166][168-171]


Major business use cases of agentic AI are emerging across sectors. Synopsys is deploying “agentic engineers” that augment human chip designers, accelerating silicon-to-system development for cars, aircraft, and data-centers. [55-62][88-95] MasterCard is moving from assistive AI to operational AI that autonomously detects fraud, triages signals, and initiates secure payments, requiring millisecond-scale decisions. [105-112] NetApp is embedding agents at the storage controller to improve data quality, reduce latency, and support multi-cloud AI workloads without moving data through cumbersome pipelines. [128-143]


All firms stress the need for robust enterprise guardrails and risk-management frameworks. MasterCard’s “agentic payments” playbook outlines four guardrails: (1) know your agent, (2) security-by-design, (3) clear consumer intent, and (4) traceability/auditability to ensure accountability and prevent misuse (e.g., the accidental sushi order). [218-226][227-231] NetApp highlights multi-level guardrails (public-private partnership, data governance, and human accountability) to contain the broader “blast radius” of agent errors. [235-248]


The panel converges on concrete policy recommendations centered on voluntary standards and global coordination. Participants urge policymakers to lean on NIST-style consensus standards, adopt the OECD AI Principles as a universal reference, and develop technical benchmark platforms for multi-agent systems. They also point to complementary bodies such as the International Consortium of Safety Institutes, the ITU/UN, and regional forums (e.g., Singapore International Cyber Week) for ongoing collaboration. [260-267][275-283][304-313][386-395][401-407][410-418][429-434]


Overall purpose/goal:


The discussion is designed to map the business opportunities and public-policy implications of “agentic AI,” showcase real-world use cases, identify the risks and necessary guardrails, and gather industry input that can shape U.S. and global standards and regulatory approaches.


Overall tone and its evolution:


The conversation begins with an upbeat, collaborative tone as moderators introduce the panel and speakers share enthusiastic visions of agentic AI’s potential. As the dialogue moves to specific use cases, the tone remains optimistic but introduces cautionary notes (e.g., Prith’s safety warnings, Caroline’s sushi-order anecdote). When discussing guardrails and policy, the tone becomes more measured and risk-aware, emphasizing responsibility, accountability, and the need for clear standards. The session closes on a constructive, appreciative note, with panelists expressing confidence that their recommendations will inform policymakers and drive responsible innovation.


Speakers

Jason Oxman – Moderator/Host; President & CEO, Information Technology Industry Council (ITI) [S20][S21]


Austin Mayron – Acting Director, U.S. Center for AI Standards and Innovation (CAISI); Senior Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office [S11][S12]


Prith Banerjee – CTO and SVP, Synopsys (electronic design automation and semiconductor IP) [S4]


Caroline Louveaux – Chief Privacy, AI and Data Responsibility Officer, MasterCard [transcript]


Syam Nair – Chief Product Officer, NetApp (global multi-cloud service provider) [S5]


Sam Kaplan – Assistant General Counsel for Global Policy, Palo Alto Networks [S6][S7]


Carly Ramsey – Lead, Public Policy, Asia Pacific, Cloudflare [S8][S9][S10]


Jennifer Mulvaney – Public Policy, Adobe [S14]


Danielle Gilliam-Moore – Director, Global Public Policy, Salesforce (AI policy work) [S19]


Ellie Sakhaee – Public Policy team member, Google; Ph.D. in Computer Science/Machine Learning [S22][S23]


Combiz Abdolrahimi – Industry professional with former regulator experience; specific role/company not clearly specified [S1]


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Opening framing – Jason Oxman opened the session at the AI Impact Summit by explaining that it would feature a two-part panel: the first half would make the business case for “agentic AI”, and the second half would explore the public-policy measures needed to both encourage and safeguard its deployment [1-3]. He introduced the opening speaker, Austin Mayron, Acting Director of the U.S. Center for AI Standards and Innovation (CAISI), and underscored the administration’s focus on this emerging technology [7-12].


CAISI overview – Austin outlined CAISI’s recent evolution. Originally founded as the U.S. AI Safety Institute, it was re-branded in June 2025 under Commerce Secretary Howard Lutnick to focus on standards and innovation rather than solely on safety [16-18]. He noted that CAISI sits within the Department of Commerce, serving as the “front door” for industry to engage with the U.S. government [19-22] and co-locates with the National Institute of Standards and Technology (NIST) [26-30]. This partnership lets CAISI leverage NIST’s historic, non-regulatory role of promoting economic growth through voluntary standards [27-29].


Current CAISI actions – CAISI has launched an AI-agent standards initiative, issuing a Request for Information (RFI) on AI-agent security and encouraging comments on NIST’s draft on AI identity and verification [32-37]. It also announced sector-specific listening sessions for health care, education and finance to identify adoption barriers and shape best-practice guidelines [38-41]. Austin explained the agency’s bottom-up, grassroots approach and urged industry to submit comments on the AI-agent security RFI and to participate in the upcoming listening sessions [140-150]. He added that CAISI could develop benchmarks and evaluation methods to give firms confidence that AI agents handle personally identifiable information (PII) in compliance with regulatory obligations [160-163].


Business-side use-cases


Synopsys/Ansys – Prith Banerjee described Synopsys as the leading provider of electronic design automation tools that enable the design of billion- and trillion-transistor chips for companies such as NVIDIA, AMD and Qualcomm [58-60]. After Synopsys’s $35 billion acquisition of Ansys, the firm now positions itself as a “chips-to-systems” company [61-63] and is a $10 billion company with a market cap of $100 billion [64-66]. He explained that modern products such as software-defined cars, aircraft and data centres require “software-defined verification and validation” before silicon is fabricated [66-68]. To meet accelerating product cycles, Synopsys has created “agentic engineers”: AI-driven agents that perform lower-level reasoning tasks, complementing human engineers while keeping humans in the loop [88-93]. Prith framed this as “physical AI”, where agentic technology augments the design of complex physical systems [94-95].


MasterCard – Caroline Louveaux highlighted MasterCard’s long-standing use of AI for security, noting a shift from assistive AI that merely recommends actions to operational AI that autonomously detects fraud, triages signals and initiates secure payments [105-108]. Because decisions must be made in milliseconds, she stressed that agents must operate within clearly defined values, permissions and human-oversight boundaries [109-115]. MasterCard has codified this into a four-point “agentic payments” playbook: (1) know your agent; (2) security-by-design; (3) clear consumer intent; (4) traceability and auditability [218-231]. She illustrated the importance of intent with a recent incident where an employee’s casual query led the agent to place a sushi order using the employee’s card details [227-230].
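MasterCard’s four guardrails are described only at the level of principles. As a minimal sketch of how such checks might compose in code (every class, field, and the authorization rule below is a hypothetical illustration, not MasterCard’s actual playbook implementation):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: names and logic are assumptions, not MasterCard's system.

@dataclass
class AgentAction:
    agent_id: str       # guardrail 1: know your agent (registered identity)
    signed: bool        # guardrail 2: security by design (authenticated request)
    stated_intent: str  # guardrail 3: clear consumer intent
    amount: float

@dataclass
class AuditLog:
    # guardrail 4: traceability/auditability - every decision is recorded
    entries: list = field(default_factory=list)

    def record(self, action: AgentAction, decision: str) -> None:
        self.entries.append((datetime.now(timezone.utc), action.agent_id, decision))

def authorize(action: AgentAction, registry: set,
              consented_intents: set, log: AuditLog) -> bool:
    """Approve an agent-initiated payment only if all three gates pass."""
    ok = (
        action.agent_id in registry                     # known, registered agent
        and action.signed                               # request is authenticated
        and action.stated_intent in consented_intents   # matches user's consent
    )
    log.record(action, "approved" if ok else "blocked")
    return ok
```

Under this sketch, the “accidental sushi order” is blocked at the intent gate: the agent is known and the request is signed, but `order-sushi` was never a consented intent, so the action fails and the refusal is still logged for audit.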


NetApp – Syam Nair explained NetApp’s strategy of embedding AI agents close to storage controllers to improve data quality and reduce latency for multi-cloud workloads [135-137]. By processing data at the source, agents can support real-time risk detection-particularly for cybersecurity threats that now break out in under a minute [138-140]. NetApp places itself at roughly level-three of a five-level agent maturity model, reflecting early-stage but rapidly advancing capabilities [141-143]. He argued that robust data governance-tracking lineage, ensuring integrity and preventing manipulation-is the core guardrail, because agents lack empathy and make decisions solely on data [241-245]. Ultimately, accountability remains with human owners, not the agents themselves [246-248].
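The five-level maturity model Syam references is only partially named in the session, so a toy encoding looks like this (the transcript names only the level-one assisted/co-pilot stage and the level-five network of agents; the intermediate labels are assumptions for illustration):

```python
from enum import IntEnum

class AgentMaturity(IntEnum):
    # Levels 1 and 5 follow the session's description; 2-4 are assumed labels.
    ASSISTED = 1         # co-pilot style suggestions, human executes
    TASK_AUTOMATION = 2  # agent executes single well-bounded tasks
    ORCHESTRATED = 3     # agent chains tasks under human-defined policy
    AUTONOMOUS = 4       # agent plans and acts, human-on-the-loop
    MULTI_AGENT = 5      # network of agents coordinating end-to-end

def requires_human_signoff(level: AgentMaturity) -> bool:
    # One possible governance rule: below full autonomy, a human
    # approves each action before it takes effect.
    return level < AgentMaturity.AUTONOMOUS
```

NetApp’s self-assessment of “somewhere in the three range” would map to `ORCHESTRATED` here, a stage where agents chain tasks but human sign-off still gates execution.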


Enterprise guardrails discussion – Austin explained the bottom-up approach and again urged participation in the RFI and listening sessions [140-150]. Prith introduced a responsible-AI perspective, emphasizing that verification and validation must be exhaustive before hardware prototyping [170-176] and warning that unchecked acceleration of agentic AI in physical systems could enable scenarios such as an autonomous car in Mumbai or a software-defined aircraft being hijacked and weaponised [188-207]. He concluded that safe AI must be embedded throughout the engineering workflow, especially for high-risk domains such as data centres or even nuclear arsenals [202-207]. Syam expanded on the notion of “blast radius”, noting that a network of agents can amplify errors far beyond traditional insider threats [235-238]. He advocated for multi-level guardrails: public-private partnerships to define operational constraints, rigorous data governance to secure the inputs that power agents, and clear human accountability for outcomes [239-248].


Policy recommendations


Human-first principle – Jennifer Mulvaney reinforced that policy should protect people before models and focus on preventing harm [260-267].


Autonomy continuum – Ellie Sakhaee proposed regulating AI based on the autonomy continuum-moving from human-in-the-loop to human-on-the-loop to human-in-command as agents become more capable-mirroring the FAA’s evolution from “pilot always in sight of drones” to “pilot in command of drones” [279-288].
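The autonomy continuum can be read as a mapping from agent capability to an oversight regime. A schematic lookup (the numeric thresholds and tier boundaries are illustrative assumptions, not a proposed regulatory rule):

```python
# Schematic mapping of agent autonomy to human-oversight regimes, in the
# spirit of the human-in-the-loop -> human-on-the-loop -> human-in-command
# progression. Thresholds below are illustrative assumptions only.

def oversight_mode(autonomy_score: float) -> str:
    """Map an autonomy score in [0, 1] to an oversight regime."""
    if not 0.0 <= autonomy_score <= 1.0:
        raise ValueError("autonomy_score must be in [0, 1]")
    if autonomy_score < 0.33:
        return "human-in-the-loop"   # a human approves each individual action
    if autonomy_score < 0.66:
        return "human-on-the-loop"   # a human monitors and can intervene
    return "human-in-command"        # a human sets goals and hard boundaries
```

The FAA analogy from the session fits this shape: as drone capability grew, the oversight requirement shifted from “pilot always in sight of drones” toward “pilot in command of drones” rather than disappearing.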


Open models and standards – Carly Ramsey stressed that “open models, open standards are really interesting and are allowing people to access tools they might not normally be able to access,” and called for global harmonisation so regional frameworks such as Singapore’s align with NIST [304-313][306-307].


Standards organisations – Sam Kaplan added that standards organisations are essential for mapping the three-dimensional risk surface of agentic AI, which now includes kinetic consequences [333-338] and highlighted the International Consortium of Safety Institutes as another multilateral venue [400-404].


Agile ministry-led frameworks – Danielle Gilliam-Moore highlighted the importance of agile, ministry-led governance frameworks that can act faster than lengthy ISO processes, noting that “ISO controls take about three years” and citing ISO 42001 as an example of a long-running standard-development process [351-362].


Concrete operational guidance – Combiz Abdolrahimi echoed the demand for playbooks, benchmarks and clear governance structures rather than abstract principles [429-434] and suggested broader multilateral forums such as the UN/ITU and AI for Good initiatives.


Multilateral coordination – Across the panel, participants converged on the OECD as the primary forum for AI policy, noting its influence on the EU AI Act and its role in shaping global reporting frameworks [386-393].


Points of agreement & disagreement – All participants agreed on the necessity of robust guardrails, the centrality of standards, and the value of international coordination. Notable disagreements included: (1) voluntary, consensus-based standards versus sector-specific, agile regulatory frameworks; (2) data-governance as the foundational guardrail (Syam) versus security-focused standards (Sam); and (3) the optimal multilateral venue-OECD (Danielle), International Consortium of Safety Institutes/UN/ITU (Combiz), or regional events such as Singapore International Cyber Week (Carly).


Key take-aways


– CAISI/NIST are building a voluntary, standards-driven framework for agentic AI security and PII handling.


– Leading firms (Synopsys, MasterCard, NetApp) are already deploying agentic AI to accelerate chip design, payments and data-infrastructure operations.


– Robust guardrails-verification/validation, data governance, clear human accountability, and sector-specific playbooks-are essential for safe deployment.


– Policy should follow an autonomy-continuum approach, leverage open models/standards, and be coordinated through multilateral bodies (OECD) while allowing agile, ministry-led actions where needed.


Closing remarks – Jason thanked the panelists, acknowledged the Government of India for hosting the summit, and expressed optimism that the recommendations will inform policymakers and enable safe, trusted adoption of agentic AI [435-444].


Session transcriptComplete transcript of the session
Jason Oxman

Our second discussion will be this panel, which will discuss the business case use of agentic AI. And then we’ll follow that with a second panel, which will discuss the public policy implications of agentic AI. That is to say, what government should be doing to encourage and to safeguard the use of agentic AI. We all know that agentic AI is quite literally the AI of agents. And there’s been a lot of discussion here at the AI Impact Summit about how agentic AI is creating new opportunities for jobs, for societal benefits, for use cases across different industries. And one of the most important questions is, of course, what public policy solutions are going to be necessary to encourage the use of agentic AI.

So I’m very pleased to welcome as our opening speaker, Austin Mayron, who is the Acting Director of the Center for AI Standards and Innovation, and a senior, you have the longest title in the world, Austin. Thank you. Senior Legal Advisor to the Undersecretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office. Austin, we are thrilled to have you here. You have some very interesting updates on how the U.S. administration is approaching agentic AI, including what the office is doing, which I think is enormously important as well. So you’re going to join us for a few minutes of table-setting remarks, if you will, and we’re thrilled to have you here.

Austin, I’ll turn it over to you.

Austin Mayron

Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin Mayron, and I’m the Acting Director of the U.S. Center for AI Standards and Innovation, also called CAISI. CAISI was originally founded as the U.S. AI Safety Institute, but last year in June of 2025, Secretary of Commerce Howard Lutnick refounded us as the Center for AI Standards and Innovation. That signaled a shift away from safety principles, more towards standards and innovation. I think there’s two organizational aspects of CAISI that are worth note. The first is that we’re located within the Department of Commerce. We are very focused on helping industry. The Secretary has tasked us to be the front door for industry to the United States government, and we really see ourselves as serving in that role.

We collaborate with various aspects of the AI ecosystem, including the Frontier Labs, for instance, on pre-deployment evaluations. And we like to partner with industry to help understand government. As one example, sometimes there’s a lack of AI expertise within the U.S. government. And CAISI, because we have talent from Frontier AI Labs, we’re able to help explain novel concepts to other aspects of the administration. The other aspect of our organization that bears note is that we’re located with NIST, the National Institute for Standards and Technology. And the thing that’s worth noting there is that NIST, throughout its history, it hasn’t been a regulatory organization. It’s been an organization that’s promoted economic growth and technological development by developing standards and facilitating the development of standards and best practices.

And so CAISI, we see our role as partnering with industry to develop the standards and best practices they need to flourish. And here, we’re here today to talk about AI agents, which is an incredibly timely topic. And so I thank ITI for organizing this. Just this week, CAISI, my organization, we kicked off an AI agent standards initiative. Our goal is to hear from industry how traditional standards work, best practices, guidelines can help unlock and facilitate adoption. So one area where we’ve already started that work is on AI agent security. We put out a request for information or RFI about what challenges industry is facing with AI agent security. Our colleagues at NIST at the Information Technology Laboratory also have a publication out for comment on AI identity and verification, which we encourage you, if you’re interested, please look at the documents, review them, send in your comments.

We also announced this week that we’re going to be holding sector-specific listening sessions on barriers to adoption, in the sectors of health care, education and finance. And our goal here is we want to learn actually what are the challenges that industry is facing. These AI agents, they have tremendous potential, but we want to understand how CAISI and NIST and the U.S. government can help unlock adoption through standards and best practices. So I’m delighted to be here and take part in this conversation and learn more from my fellow panelists.

Jason Oxman

Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agentic AI. As I mentioned, we have three great experts here to start us off on the business side discussion before we move to the policy side discussion, because I really think it’s important for us to understand exactly what use cases of agentic AI are happening across different segments of the AI stack. So we’re very fortunate to have three experts here to help us with this discussion. Prith Banerjee is the CTO and SVP of Synopsys, the design software automation semiconductor company. Great to have you here, Prith. Caroline Louveaux is Chief Privacy, AI and Data Responsibility Officer at MasterCard.

Caroline, thanks for being here. And also delighted to have Syam Nair, who is Chief Product Officer at NetApp, the global multi-cloud service provider. And so the three of them are each going to share a couple minutes of opening remarks on agentic AI use cases. What we’ve asked them each to do is share with all of you kind of the top favorite agentic AI use case that’s happening so that we can use that as a way to frame the discussion around business and policy solutions. So if we could, Prith, I’ll start with you for your favorite agentic AI use case that’s happening at Synopsys.

Prith Banerjee

Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agentic AI is actually the core of this. But before I do that, I want to share with you what Synopsys does. Synopsys is the leading provider of electronic design automation tools and IP to design chips. So the chips from, say, NVIDIA or AMD or Broadcom, Qualcomm, these billion-transistor chips, trillion-transistor chips, are designed with Synopsys tools. But the opportunity that Synopsys has seen is these chips are going into systems, systems that are like cars or aircraft or spacecraft or data centers, healthcare, et cetera, right? So we have this vision of chips to systems, and because of that, Synopsys recently acquired Ansys for $35 billion, right, to be a chips-to-systems company.

I came into Synopsys as CTO at Ansys. So now the challenge that I want to share with all of you is as you are designing a car, right, it’s a software-defined car, right, a Tesla car has more than 100 million lines of C code in that car. That code runs on an ECU, an ECU designed by NXP or STMicro or Qualcomm. And that chip is still not yet designed, right? It is being designed with, say, Synopsys tools, but you’re writing software on that chip, and so you have to do what is called software-defined verification and validation, right, before the chip is designed. And that control will control the electric brakes, the electric steering, the autonomous driving of the car.

And the car is, it’s a physical product, it is being driven on the road, right? And so you use Ansys physics simulation like Fluent for aerodynamics or LS-DYNA for crash or HFSS for electromagnetics. So essentially what we are doing is bringing the physics of the world around us, powered by AI, along with the chip design, in what we call intelligent product design: silicon design, the chip inside any complex design, software-enabled, so you can do software updates, and AI-driven. So that’s all the context. And we are a $10 billion company with a market cap of 100 billion. So the agentic AI part is the following, that the pace of innovation in the world is changing.

You used to design a new car every 7 years or maybe 5 years. That pace of innovation is changing. like Tesla, Elon Musk said we have to do it every year. Every year they want to bring a new car to market. Or NVIDIA Jensen, right? The chip design used to be every three years. NVIDIA Jensen says you have to do it every year. So the pace of innovation is becoming faster and the complexity. You used to have a chip with maybe a million transistors. Now it’s a billion transistors. It’s a trillion transistors. It’s incredibly complex. And then you have the chip with all the complicated system. The complexity is so hard that you used to have human designers at the Qualcomm, NVIDIA, etc.

who could use those things using the Synopsys tools. You cannot do that anymore. It is very, very hard. That’s where agentic AI is coming in. So at Synopsys what we have created is agentic engineers. These are like human engineers that are not trying to take the jobs of human engineers away. They are going to complement the job of a human engineer. So you at Broadcom, Qualcomm, you have a hundred thousand engineers, but you will be complemented with another 200,000 agentic engineers from Synopsys who will do the lower-level reasoning job like a human, right? But the human will still be in the loop to make sure that you are not doing drastic sort of bad things, right?

This is the incredible opportunity. But as the world talks about agentic AI in the world of large language models and data and words as tokens, our world is what we call physical AI, which is physics, and it’s the physical AI part where we are applying our agentic engineering technology to. Very, very exciting area.

Jason Oxman

That’s great. And I love how you described the human engineers being complemented by, not replaced by, the agentic AI that’s helping them be more efficient and do their jobs better. Caroline, I think of payments networks as having used AI for decades, literally. The fact that you can take a plastic card and tie it back to a human being, no matter where they are in the world, is actually truly remarkable. When you think about how payments networks work, the technology is truly remarkable, especially since you’re processing literally millions of transactions a second around the world. So with that, you look over global AI for MasterCard, and I’m curious how agentic AI is influencing the work that you and your colleagues do to make these payments rails run around the world.

Caroline Louveaux

Absolutely, and hi, everyone. It’s great to be here with you. As you said, for MasterCard, AI is nothing new. We have been leveraging AI for decades to make our payment network safer and more secure for everyone. Now with agentic, we are moving from AI systems that recommend to AI systems that act, right? And in cybersecurity and payments, the shift is already real today. Agentic AI systems are being deployed, for example, to detect suspicious transactions, to triage fraud signals, to initiate secure payment flows. If you think about it, if we want to be able to detect and to block fraud in real time, decisions have to be made in milliseconds, at scale. And of course, while speed and scale matter a lot, accountability is a must.

What’s important is that these agents don’t make decisions with open-ended autonomy. They must act within clear values, principles, within clear permissions. What is the agent allowed to do? What is it not allowed to do? And when does a human need to step in? And of course, humans have to have full oversight end-to-end. So, I mean, there are many other use cases. I’m happy to talk more about that, but I think that’s really our main use case. But of course, the technology is moving really, really fast. We are now talking about this multi-agent ecosystem that raises a whole new range of opportunities as well as novel challenges. And so that’s where these kinds of summits where we all come together are really, really important to really get it right.

Jason Oxman

I love how you characterize it as moving from what we call assistive AI to operational AI. In other words, instead of just helping with a task, the AI, as an agent, can actually take a task on, still with oversight in the system. And, I should have previewed this: we’re going to come back around and talk to the panelists about guidelines and protections, and as Austin importantly noted at the outset, the security of the system, how that’s built in as well. And, Syam, I want to come to you next. The multi-cloud that NetApp operates obviously is moving data around the world on behalf of customers, storing data around the world and allowing your customers to access data in a multi-cloud environment.

How is agentic AI helping NetApp with that level of customer service?

Syam Nair

Thank you. So NetApp actually, as you said, multi-cloud, we power both public cloud as well as private cloud. Much of the largest infrastructure, actually the data infrastructure, is built on NetApp, from a file storage standpoint. One of the key challenges in AI itself is having quality of data. Data quality is super important, and the previous session actually talked about it. And data quality, especially from truly unstructured data, how do you really get the structured value out of it? And that’s where agents can actually help, which is we are developing agents which are sitting closer to the storage controller. If you know the storage architecture, that means that without moving data and going through cluttered pipelines and, you know, positioning the data ready for AI, you can actually have the data at the source itself, which will be ready for AI.

And how this helps is, you know, in many of the areas, cybersecurity, as it continues to grow as a threat, you know, 59 seconds is the average breakout of a threat these days, risk and threat will become super important to manage. And you need to do that at the layer where the data sits. So agentic has a really good use case with respect to that. We are still in our journey, early journey, in terms of building these capabilities. One would say, look, if you have five levels of agentic AI, where level one is mostly assisted, co-pilot, to autonomous agents, running a network of agents at level five, we’re still in that journey, somewhere in the three range.

And that’s what we see from customers in terms of how they want to leverage data. So that’s one of my favorite use cases in preparing the data, making sure that the right data is available both for the agents and the agents can make it available for the use cases.

Jason Oxman

Yeah, interesting. So the agents are actually helping you expose any risks that may need to be addressed as part of that provisioning of data. And, Austin, I’m going to ask you to set up our second round question with me. And that is, you know, the industry has a responsibility to inform governments about risks and how they’re being addressed, as we move into the next question for the panel around enterprise guardrails that companies are seeing. So, Austin, is there anything in particular you would flag that you’re looking to hear from industry in the U.S. administration about those guardrails? You are overseeing an operation that asks for industry input, which I think is rare and particularly great. So thank you for doing that. Perhaps some practice tips that you can provide to everyone in the room about what it is helpful to provide government, the U.S. administration or other government colleagues that you’ve heard from on these issues and how it’s helpful to provide that information.

Austin Mayron

Yeah, absolutely. So at CAISI, our focus right now is truly on unlocking innovation and adoption. And we work in the standards space, and so we look to how NIST-fostered standards and best practices and guidelines documents can help with that innovation and that adoption. And so the NIST process, the way it normally works is we like to gather and collaborate with industry to understand the challenges they’re facing. It’s more of a bottom-up, grassroots approach than a top-down one. We’re not sitting there in Washington and saying, you know, this is the problem and we’re going to fix it. We take a little bit of humility and say, we don’t actually know what the problem is until we talk to the people who are closest to the issue, because we only have a narrow slice of the world from our vantage point, and the people who are actually in the field working on innovation, working on adoption, they have a better sense of what the barriers are.

And so we encourage everyone in industry and across the ecosystem to really engage with us, to tell us the problems that you’re encountering, and we have structured formal ways for you to do that. For instance, the request for information on AI agent security, I think it’s open for about another month, and some have already submitted comments, but we look forward to comments. As I said, we’re also convening listening sessions, I think in April, on barriers to adoption, particularly on agent issues for education, healthcare, and finance. We’re starting with those three sectors, but we really welcome that type of engagement, because we want to facilitate adoption. And one example that I sort of like to use…

I don’t know if it’s actually a barrier to adoption, but let’s say in a regulated field like healthcare or education, there’s PII, and there’s a reluctance to adopt because it’s unclear how the AI agents and systems are treating PII and whether that will satisfy regulatory burdens. CAISI could play a role in settling concerns about that, because we could develop benchmarks, methodologies, and evaluation methods to give industry the confidence they need that, for instance, the model they’re looking to procure, adopt, and implement handles PII the way they need it to in order to satisfy their regulatory obligations. So that’s a way where CAISI, through measurement science, best practices, and standards, can help facilitate adoption. We’re also looking at interoperability, and we’ll have more about that in the coming months.

Jason Oxman

That’s great. Really appreciate that, Austin. And I love the focus on voluntary, industry-driven, consensus-based standards, because that’s how the tech industry prefers to operate. It’s better than government regulation, particularly because those standards are global in nature, and NIST is a great example, as you noted, of support for those voluntary, consensus-based industry standards under which we would all prefer to operate. And, Prith, I’ll come back to you on this question of, I guess I’d call them guardrails, the enterprise guardrails around risk management that you’re putting in place. Governments are paying attention, and we want to handle these issues in the private sector. What are you seeing that’s important as far as those enterprise guardrails for risk management?

Prith Banerjee

So that’s a great question. Actually, at the AI Summit yesterday, a lot of speakers, starting with Prime Minister Modi and President Macron, talked about responsible, safe AI and AI for everyone. But I want everybody in the audience to understand what is going on in this world, right? So there is one kind of problem: you have a video that someone can watch on, say, YouTube or Facebook, and you want to prevent a young child from watching it. That is responsible AI; you want to make sure that a 12-year-old doesn’t watch it. But if he or she does watch it, it’s not the end of the world. The world that we live in, though, is intelligent product design, right?

You are designing a car, and we have, as Syam was mentioning, level 1, which is assistive, all the way to level 5, which is fully autonomous. Now, imagine a world, and I’m doing the scary part now so you understand how scary it can be: an autonomous car that is driving on the streets of Mumbai. It’s supposed to be autonomous, making sure the pedestrians and the cows are being avoided. But suppose there is a cyber attack, and somebody goes in and wants to use that car as a weapon. As you know, there are terrorists who drive vehicles into crowds, right? So we have to make sure these software-defined systems are protected. Just imagine an airplane, right?

You know what has happened in the past: on 9/11, airplanes were used as weapons. So you could imagine a software-defined airplane being used as a missile, right? This is how important it is, because unlike the world of Facebook and Google, and I’m not undermining Facebook and Google, I’m just saying they are dealing with people watching content and clicking like or unlike. We are dealing with physical AI interacting with the real world. If something goes wrong in the real world, some really dangerous things can happen. And so we have to be extra careful. That’s the challenge. What we are trying to do is to make sure that as part of this agentic engineering workflow, we are doing it in a responsible manner, in a safe manner, right?

And that includes the work we are doing in terms of verification and validation. In the software flow, before we actually do a hardware prototype, we do full, close to 100% coverage at the digital level. We are designing the airplane on the computer, designing the car on the computer, with as close to a 100% guarantee as possible; nothing is 100%. But I want you to understand how much more complicated this is, because we can design software-defined data centers or software-defined nuclear arsenals, and in the hands of the wrong person, some bad things can happen. So we have to be extra careful about the responsible, safe AI that we apply to our intelligent product design.

It is happening; software-defined is happening. But we have to be super careful.

Jason Oxman

Thank you. Sometimes the best way to get people to pay attention to what you’re saying is to scare them, and you’ve certainly done that. And Caroline, there’s a lot of bad stuff happening on the payment systems as well, and the consequences of fraud and security breaches, or an actual shutdown of the network, are almost impossible to contemplate: global commerce grinding to a halt. I don’t know if you want to scare people like that as well when you talk about this.

Caroline Louveaux

Let me go there.

Jason Oxman

Go ahead.

Caroline Louveaux

Before we get to enterprise guardrails: coming to New Delhi, I watched Companion, a movie about a romance robot. I’m not going to spoil the ending, but it’s actually a scary story for sure. Now, back to the MasterCard world. The principle is very simple: autonomy can only scale if there’s trust. And so at MasterCard, we think we have a role to play when it comes to agentic commerce, meaning you use an agent to make payments on your behalf. We want these agentic payments to be safe, secure, and trusted. And therefore, we came up with a playbook with four key guardrails. The first one is know your agent. Before an agent acts, and before it makes a payment, we want to make sure that it’s verified and trusted.

So everyone needs to know that it’s a legitimate agent and not a rogue robot or a fraudster. Important, right? The second one, of course, is security by design. That has to remain the foundation, and so we are leveraging advanced technologies around customer authentication and tokenization to make sure that sensitive credentials, for example your card number, are not visible and not exposed to third parties, to the merchants, to the agents, or anyone like that. Third, and this is a bit new, we want to make sure that we have clear consumer intent. The consumer has to always be in control of what he or she authorizes the agent to purchase on his or her behalf. We learned this the practical way just a couple of months ago.

An employee at MasterCard decided to ask an agent, hey, are you able to buy sushi? The idea was just to test the agent’s capability to do so, but the agent took the question literally and placed an order using the employee’s card details on file. So, lesson learned: clarity matters, clarity of intent that can be verified; otherwise you end up with these platters of sushi. And then, last but not least, everything has to be traceable and auditable. That’s needed if you want to be able to give consumers redress if things go wrong, for dispute resolution, and of course to make the regulators happy and comfortable. These guardrails are not there to slow adoption; if done well, they’re going to be key to scaling adoption in a way that is trusted by design.

Jason Oxman

Great. Sushi is not scary, but the use case you described is, so I appreciate that.

Caroline Louveaux

It’s only sushi, we’re good.

Jason Oxman

It’s only sushi, that’s right. Syam, you get to wrap us up, because we’re closing the panel out. You don’t have to scare people if you don’t want to, but I’d love to hear how NetApp is thinking about enterprise guardrails for risk management around agentic AI.

Syam Nair

Yeah, no scary stories. I think one way to put this is that, as humans, we used to make mistakes, but they were much more contained. Sometimes in enterprises you had insider threats, but those were much more contained too. Now you’re talking about a network of agents where the blast radius of an error, a mistake, or a threat is much more profound. So guardrails become important, and they need to exist at multiple levels. Number one is public-private partnership in identifying the guardrails for how agents need to operate. Being very specific to the enterprise and very specific to the business is important, as is working together with customers, in some cases consumers, in others business-to-business, to understand the use case and build guardrails within the system accordingly.

And more importantly, I’ll go back to what one needs to figure out: the governance of the data, because data is what is actually going to power how agents make these decisions, right? Unlike a human, there is no empathy built into the agent, at least not at this point, and it is not making decisions based on situational awareness. It’s making decisions based on the data. And if the data can be manipulated, if the lineage of the data is not properly understood, if it is not really governed, if there are no guardrails for that, then you could actually get outcomes from agents that are going to be scary. The last piece of this is: agents can do almost everything, but agents cannot take accountability.

They’re not responsible; they can’t take accountability. It’s the humans, the business owners, who take it. So having those guardrails work in tandem with the customer and consumer, and through public-private partnership, is super important in terms of defense.

Jason Oxman

Thank you. Thank you. So, what should policymakers be looking at? Our goal in the tech industry, obviously, is to ensure that public policy is inspirational to innovators, that it doesn’t interfere with the ability of innovators to get the products and services out to market that we all want to see and benefit from. But of course, policymakers have other things in mind. They want to make sure that consumers are protected, and that safety and security are part of the design of products that are deployed into the market. So we have a great industry panel of experts who are going to share their views on what policymakers should be thinking about and what they should be doing to inspire the use of agentic AI while also addressing important public policy concerns.

So I’ll ask each of our panelists to address that and to introduce themselves. Jennifer, I already said who you are, but you can introduce yourself and your company, and let’s take that as the prompt. And you get to pick one thing that you think policymakers should be most focused on.

Jennifer Mulvaney

Great. Thank you, Jason. Jennifer Mulvaney with Adobe. And, you know, I learned a great Hindi term yesterday watching the Prime Minister speak, and that is manav, human. When you really think about policy: policy has been around since the dawn of time, and it really is about helping to prevent harms against humans. That is what policy is still meant to do today. I think when policymakers look at anything, whether it’s tech or welfare or tax policy, the question is what the policy means for humans and how it prevents harm. And we as lobbyists in Washington, D.C., in my former role there, we humans go in and talk about what it means for whatever stakeholder group we’re representing.

So we’re now in a world of policy actually governing systems, not just people. But the Prime Minister’s focus on the human is something that Adobe talks a lot about as well: it should be humans before models. Our CEO at Adobe often says it’s not what we can do with technology, it’s what we should do. And I really love that statement, because it really does ask what this is going to mean for humans, and how we can advance that agenda.

Jason Oxman

Love that. Thank you, Jennifer. Yep. Ellie Sakhaee.

Ellie Sakhaee

Hi, everyone. I’m Ellie Sakhaee. I am part of the public policy team at Google. Several of our colleagues on the previous panel mentioned that agentic AI is not a single point in development, right? As we think about agentic AI, we should be thinking about a continuum, depending on an agent’s autonomy, its access to memory, the context of use, and its ability to do long-term planning and act in the real world. That is why I think it’s important, when we think about policy, to think about this continuum of agents rather than declaring that one thing is agentic and another is not. That being said, I think one of the main safeguards we talk about is the human in the loop for agentic AI.

And that also varies significantly with the ability, or the reliability, of an agent. As we move from agents that need confirmation and human approval for every single step they want to take, to agents that are more autonomous, we should be thinking about moving from human in the loop to human on the loop, or human in command. A similar analogy is how the Federal Aviation Administration in the U.S. thinks about moving from the pilot always keeping a drone in sight to the pilot being in command of the drone. As the safety of these drones improves, and as the systems that track them through detect-and-avoid capabilities improve, we can move from the pilot

always keeping the drone in sight to the pilot being on the loop or in command. So I think these analogies from different industries allow us to think about agents. Another thing policymakers should consider, as they think about agents, is that agents may be a new technology, but at the end of the day they may cause harm. We should be thinking about regulating the use, the application, or the harm they actualize, rather than regulating the underlying technology. Otherwise, we end up regulating, let’s say, AI models that, by the time the regulation goes into effect, have evolved into something that is now agentic.

Jason Oxman

Makes sense, and I appreciate your perspective. And I should have noted that you’re not only doing public policy work for Google; you’re actually a real computer scientist, a Ph.D. in machine learning. She knows how the machines think, which is important as well. And sometimes they talk to us, right? Sometimes. Let’s go to Carly from Cloudflare next.

Carly Ramsey

Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare, and I’m based in Singapore. Cloudflare, for those of you who don’t know us, runs a global network; we sit in between our customers and their users and protect the traffic that goes back and forth. A large majority of the AI model providers are our customers as well, so we’re protecting that traffic too, which gives us a unique viewpoint. We also offer developer tools, and people are building AI agents on Cloudflare, so there’s that angle that Cloudflare sees as well. So, like you said, I’ll choose one thing that we recommend to policymakers.

That’s a hard one, but in keeping with the theme of this summit, which is very much about inclusive AI, I think something policymakers should consider is whether or not we’re making agentic AI available for everyone, right? So that becomes: is it accessible? Are the standards open? I think open models and open standards are really interesting, and they are allowing people to access tools that they might not normally be able to access. And as policymakers think about diffusing this technology more widely, maybe even beyond the enterprises, one thing that really concerns me, as someone who sits in Asia Pacific, is how we ensure that the different governments, as they make these tools accessible, are talking to each other.

And I think ITI has a really neat role to play in that, actually, because we all know that NIST is the gold standard, and these are voluntary standards that are often referenced a lot in Asia. Singapore just came out with its own framework on agentic AI governance, right? And the question is whether that is going to be compatible with whatever NIST puts out. Big question. Singapore is a leader in cybersecurity standards in this region. And I’ve had some interesting conversations here these past couple of days about India. Obviously, with the bastion of tech talent that we see in India, they want to be involved in standards development, and for the Global South.

You know what I mean? So, great. How do we get them involved? And how do we make sure, as global companies, that all of these standards aren’t contradicting each other, right? So that harmonization piece is very important.

Jason Oxman

So important. Technology doesn’t want to stop at borders; it wants to serve the world. Such an important issue. Sam, Palo Alto? Palo Alto? Perfect. Palo Alto.

Sam Kaplan

You conveniently sat the two cyber companies, the cybersecurity companies, next to each other. So my name is Sam Kaplan. I’m the Assistant General Counsel for Global Policy at Palo Alto Networks, and for those of you who don’t know us, we’re the world’s largest pure-play cybersecurity company. Can you hear me? Yeah. Okay. That’s better. Sorry, I need to project better. Anyway, Jason, to pivot off of your question: at a high level, one message we could impart to policymakers is to start with the standards organizations, to tell you the truth. The standards organizations, both in the United States and abroad, Carly referred to the Singapore agency, are in the midst of developing these voluntary frameworks that are really serving as the foundation, not only for understanding the technology but for better understanding the risk picture we are facing with these types of technologies. We started with traditional model security frameworks for LLMs that are all based on prompts and responses.

These standards-setting organizations are now very, very deep into developing the same kinds of standards for agentic AI. As they paint a better picture and work with industry to understand how that risk picture is changing, what was once an almost two-dimensional understanding of the risk around AI models becomes very much a three-dimensional picture when you’re looking at agents, because these are the parts of the models that all of a sudden have arms and legs. So when you’re looking at this from a security perspective, you’re taking what could be a digital threat that can metastasize on networks, and these are threats that all of a sudden can have kinetic consequences in real life as agents execute decisions across the financial system, from your previous panel, and across autonomous systems.

So understanding that risk picture is going to be critically important. And last, I think that really pivots into one of the themes of the summit itself: as policymakers look at responsible and safe deployment, they need to understand and appreciate that security, the security of those models and of those agents, is a foundational layer for increasing trust and facilitating responsible deployment of AI, because it’s the best way to secure and, as much as we can, understand the behavior of these models and agents as they interact with the ecosystem and now the real, physical world.

Jason Oxman

Yeah, and policymakers are keeping an eye on all the products and services to see whether that is done well or not, in which case they may step in. All right, to follow your thematic, we’re moving from cybersecurity to enterprise software. You’re going to take my joke, aren’t you? You sat me next to Combiz. I know, I know. It’s not my joke, it’s Sam’s joke. But, yes, I’m going to take it. So, Danielle, please commence the enterprise software portion of our program. I can speak for you if you want me to. I’m joking.

Danielle Gilliam-Moore

Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy work. The panelists have said a lot of great things, and they’ve also stolen a lot of what I was going to say, so I’ll try to make this short. When we think about AI, I think a governance response needs to happen, and when we talk about governance, a lot of people conflate governance with regulation. Governance is more than regulation. Governance can be regulation, but it’s also standards, it’s also global norms, it’s also risk and quality assurance procedures inside companies. And on the standards piece, a critical thing to remember is that the ISO controls process takes about three years, so it’s quite a long process.

So when you look at the ISO 42001 standard, it’s a great standard, but it will take time to build on it further, which I think makes organizations like NIST and the different safety institutes incredibly important in filling in the gaps while work is being done to bring about new controls around agentic AI. The other thing I’ll say, on regulation, is that there’s an emerging framework that first started in the UK, and I’m seeing governments like Indonesia take it on as well: instead of having one large, overarching AI regulation, they’re allowing the different ministries that have core competencies in areas like financial services or healthcare to take the lead, so you have a more diffuse model. I would encourage lawmakers to look at that. Some of these agencies have years and years of relationships and expertise, so wouldn’t they be best placed to think about, not necessarily regulations, but frameworks and rules that best suit, say, a small startup operating a financial services agent or some edge use case? I think that is a more agile way to look at agentic AI, and agility, I think, brings about adoption and is very key to adoption.

Thanks.

Jason Oxman

Perfect. Combiz, is there anything left for you to say?

Combiz Abdolrahimi

I was just going to say ditto to everything that Danielle said, because that’s basically what I was going to say, and she said it way better than I could ever do. I guess I would add, having worked in government and now in industry, that I like to think I have the vantage point of a former regulator and policymaker as well as someone now in industry. And I think what we are looking for, and what we’ve heard earlier today, is that we want clarity. We want clarity. We want standards. We want to see what good governance looks like.

If I could give a message to governments and regulators: don’t give us theoretical, abstract principles; give us practical standards, what good governance actually looks like, operational clarity, playbooks, model frameworks. Jason, I remember from many years ago, when I was at Treasury and you were at ETA, there was this line: these technologies are rapidly evolving, and as they evolve, policies and regulations need to evolve with them. Otherwise, it’s going to stifle these innovations and actually create more harm than good.

Jason Oxman

Well put. Well put. All right, so now that we’ve provided a wish list for regulators, on to the next question. And Danielle, I’m going to give you the chance to go first, because of your observation that sometimes panels go down the line and it’s not fair to the people at the end. I think that’s absolutely true. I would have let Combiz go first, but you’re speaking for the enterprise software industry generally. So the question is this: one of the big themes here at the AI Impact Summit is unification of the policy agenda across countries, across governments, across regions. So is there a particular platform or organization you’ve seen?

Is there a particular place where conversations like the ones we’ve been having here should be taking place? The U.S., India, like-minded governments around the world all want to be on the same page, but there is a tendency toward India-specific standards and U.S.-specific standards. There’s a tendency for that in the physical world and in the digital world, and that’s very difficult for us to operate in. So in the agentic AI arena, I’m curious, from all of you, whether there is a particular multilateral venue, platform, or approach you’ve seen work well that you would recommend governments here look to for this.

And, Danielle, have I bought you enough time to come up with your answer so that I can call on you first?

Danielle Gilliam-Moore

I woke up this morning knowing the answer to this question. Oh, excellent. Okay. I live for this question. It’s all yours. Which is the OECD. All right. The OECD, I think, is kind of where it all started. There was this really interesting moment when the OECD put out its principles, was it 2019, I believe? And then it set the floor for everyone else. The EU AI Act’s definitions are based off of those principles. We’ve seen draft legislation at the state level that’s based off of the OECD AI principles. Globally, when I was doing rounds of meetings in APAC, they were looking at the OECD principles.

So I feel like the world is echoing the OECD in a lot of the regulatory work that it’s doing, even if it doesn’t always say so. And the OECD has been doing such interesting work. They now have the reporting framework. They’re doing work with GPAI. Their Hiroshima AI process framework was them taking the work of the G7 and bringing it into what they’re doing. So the OECD is doing so much work to reach out, and I would encourage governments to look at what the OECD is doing and help build on it.

Jason Oxman

That’s great. Sam? You can pick the same one if you want to or …

Sam Kaplan

Well, I’m actually going to layer it, because I think Danielle is exactly right. When you’re looking at policy and higher-level governance, the OECD has been the leader in this. There are structures in place through the OECD to develop these frameworks, and if you look at legislation and regulatory proposals that have come out, even across the various U.S. states, they’ve based definitions off of what the OECD did, so that has been a foundational piece. From a broader perspective, I think that’s a good layer. The one that has potential, and that I would like to see become more tactical rather than a little bit esoteric and studying, is the International Consortium of Safety Institutes. The structures are there, and you have the right players coming to the table. If those organizations, like what CAISI is doing right now, advance more tactical standards, create a taxonomy for agentic AI security, and measure how the attack surface has changed when it comes to agents, we can understand the scope of this problem.

To understand the scope and scale of this problem, I think there’s a great deal of potential, but you need these two levels working together on policy and standards.

Jason Oxman

Fantastic. Carly?

Carly Ramsey

Just to add something different to the discussion: being based in Singapore, what I’ve seen over the years is that Singapore International Cyber Week has drawn more attendance from governments around the world every year. So that is a potential venue. It’s an annual event, and the positioning is on policy, bringing governments together to discuss cyber policy. Potentially that is an area that could be considered, to make sure that countries from around the world, and India is well attended at Singapore International Cyber Week, all have a voice in the future of agentic AI.

Jason Oxman

That’s great. Love it. Ellie, do you have a preferred platform? Multilateral?

Ellie Sakhaee

Yes, I’m going to add to what my colleagues said here, and that is technical benchmarks. We talk about standards, and we may understand what individual agents do, but we don’t fully understand what multi-agent systems may do. They may have emergent risks; they may have completely different behaviors that we don’t really know, because we don’t really have real deployments of multi-agent systems yet. Some are emerging, but the risk surface will change as these agents interact with each other. So I think the academic community, industry, all of us have a role to play in developing and expanding benchmarks for multi-agent systems, to make sure they are tested before we put them into the world.

Jason Oxman

Great. Jennifer, and then Combiz, you’re going to get the last word.

Jennifer Mulvaney

Thank you for sharing. What I would say is that the OECD definitely comes to mind as the largest, most credible group, and I think that makes sense. But we do have to think about having space for some of the smaller, more regional groups as well. I was speaking in Tokyo a couple of weeks ago at the Friends of the Hiroshima G7, which had its principles back when Japan hosted the G7. So I think it’s really important to have those smaller regional groups, perhaps even focused on specific policy areas, that then feed into the bigger consortium in a way people can understand.

Jason Oxman

That’s great. That’s great. Combiz, close us out.

Combiz Abdolrahimi

Yeah, hopefully. So actually, I was surprised that nobody mentioned the one I was hoping no one would mention, so let me do it. We’re talking about standards, technical benchmarks, principles, and coordination at a global scale across the private sector, governments, academia, and institutions: the ITU, the UN, AI for Good. They do all of that. And I think we want to engage more countries and more stakeholders in this conversation and make sure that we are being inclusive. That’s one of the multilateral forums that I would look to.

Jason Oxman

That’s a terrific one. Thanks for adding one to the list at the end of the round. This has been a fantastic discussion. I love the way we paired the business discussion of agentic AI with the policy recommendations, and hopefully policymakers will pay attention. ITI is proud to represent all of the companies on the panel here today as part of the global tech industry, and particularly proud to be partnered with the Government of India on the AI Impact Summit. Our congratulations to the Prime Minister and to the entire Government of India for this incredible gathering. Thank you to all of you for being here for this important discussion, and please join me in thanking our terrific panelists.

Thank you.

Related Resources
Knowledge base sources related to the discussion topics (11)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“CAISI was originally founded as the U.S. AI Safety Institute and was re‑branded in June 2025 under Commerce Secretary Howard Lutnick to focus on standards and innovation rather than solely on safety.”

The knowledge base confirms that CAISI began as the U.S. AI Safety Institute and was re-branded under Commerce Secretary Howard Lutnick, though the exact month (June 2025) is not specified in the sources [S23] and [S29].

Additional Context (medium confidence)

“CAISI sits within the Department of Commerce, serving as the “front door” for industry to engage with the U.S. government.”

Sources note that CAISI is part of the Department of Commerce and is positioned to work with industry on standards, aligning with the described “front-door” role [S23] and [S29].

Additional Context (low confidence)

“CAISI co‑locates with the National Institute of Standards and Technology (NIST).”

While the knowledge base highlights CAISI’s collaboration with NIST on standards initiatives, it does not explicitly confirm a physical co-location; the relationship is described as a partnership rather than shared premises [S33] and [S29].

Confirmed (high confidence)

“CAISI has launched an AI‑agent standards initiative.”

The knowledge base mentions a major new government initiative aimed at supporting the development of AI-agent standards, confirming the launch of such an initiative by CAISI [S29].

External Sources (71)
S1
Agentic AI in Focus Opportunities Risks and Governance — -Combiz Abdolrahimi- Role/company not clearly specified, appears to work in industry with former government experience
S2
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agenti…
S3
Agentic AI in Focus Opportunities Risks and Governance — Thanks, Austin, so much. Really appreciate your being here and helping set the stage for us for our discussion of agenti…
S4
Agentic AI in Focus Opportunities Risks and Governance — Sure. So I’m Prith Banerjee, and my role is to look at sort of future directions of where Synopsys is headed. And agenti…
S5
Agentic AI in Focus Opportunities Risks and Governance — -Syam Nair- Chief Product Officer at NetApp (global multi-cloud service provider)
S6
Agentic AI in Focus Opportunities Risks and Governance — You conveniently sat the two cyber companies, cybersecurity companies, next to each other. So my name is Sam Kaplan. I’m…
S7
Agentic AI in Focus Opportunities Risks and Governance — -Sam Kaplan- Assistant General Counsel for Global Policy at Palo Alto Networks (cybersecurity company)
S8
Agentic AI in Focus Opportunities Risks and Governance — Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based…
S9
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and t…
S10
Agentic AI in Focus Opportunities Risks and Governance — Great. Thank you. Hi, everyone. My name is Carly Ramsey. I lead public policy for Asia Pacific for Cloudflare. I’m based…
S11
Agentic AI in Focus Opportunities Risks and Governance — 951 words | 194 words per minute | Duration: 293 seconds. Absolutely. Thank you, Jason. Thank you…
S12
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Absolutely. Thank you, Jason. Thank you to ITI, and thank you all for coming today. As Jason said, my name is Austin May…
S13
S14
Agentic AI in Focus Opportunities Risks and Governance — -Jennifer Mulvaney- Public policy role at Adobe
S15
Agentic AI in Focus Opportunities Risks and Governance — The policy panel provided concrete recommendations for government approaches to agentic AI governance. Jennifer Mulvaney…
S16
Agentic AI in Focus Opportunities Risks and Governance — – Ellie Sakhaee- Caroline Louveaux
S17
Agentic AI in Focus Opportunities Risks and Governance — Ellie Sakhaee advocated for regulating applications and harms rather than underlying technologies, noting that technolog…
S18
https://dig.watch/event/india-ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Yeah, and policymakers are keeping an eye on all the products and services to see if that is done well or not, in which …
S19
Agentic AI in Focus Opportunities Risks and Governance — Danielle Gilliam-Moore: Danielle with Salesforce. I’m our director of global public policy, and I lead our AI policy wo…
S20
Driving U.S. Innovation in Artificial Intelligence — 7. Jason Oxman – President & CEO, Information Technology Industry Council 8. Julia Stoyanovich – Associate Professor, De…
S21
Agentic AI in Focus Opportunities Risks and Governance — -Jason Oxman- Moderator/Host, appears to be with ITI (Information Technology Industry Council)
S22
Agentic AI in Focus Opportunities Risks and Governance — -Ellie Sakhaee- Public policy team member at Google, Ph.D. in computer science/machine learning
S23
https://app.faicon.ai/ai-impact-summit-2026/agentic-ai-in-focus-opportunities-risks-and-governance — Hi, everyone. I’m Ellie Sakhaee. I am part of public policy team within Google. Several of our colleagues in the previou…
S24
Agentic AI in Focus Opportunities Risks and Governance — 505 words | 146 words per minute | Duration: 206 seconds. Hi, everyone. I’m Ellie Sakhaee. I am part of public policy t…
S25
Setting the Rules_ Global AI Standards for Growth and Governance — So I would say the Manav mission, it’s welfare, human -centric, and all those aspects are there. And from the governance…
S26
Discussion Report: AI Implementation and Global Accessibility — -Deployment: Maintaining what he identified as four key guardrails: “fairness, accountability, privacy, security”
S27
Advancing Scientific AI with Safety Ethics and Responsibility — Thanks thank you very much Shyam for having me and good morning to everyone and welcome to this session. So I think okay…
S28
US tech leaders oppose proposed export limits — A prominent technology trade group has urged the Biden administration to reconsider a proposed rule that would restrict gl…
S29
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Austin Marin, Acting Director of the US Center for AI Standards and Innovation, introduced a major new government initia…
S30
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems …
S31
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — All kinds of fantastic applications already that we’re seeing right across the economy. We’re using increasingly agentic…
S32
Diplomatic policy analysis — Global collaboration: Policy analysis helps identify shared interests and opportunities for cooperation, fostering consen…
S33
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The discussion reveals extraordinary consensus among all speakers on the fundamental principles of AI agent standards de…
S34
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation: Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S35
How Trust and Safety Drive Innovation and Sustainable Growth — Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was unexpected cons…
S36
Standardisation – The Key to Unlock the Sustainable Development Goals (SDGs) — Jachia used the World Trade Organization (WTO) definitions to highlight the key difference – compliance with standards i…
S37
Agentic AI in Focus Opportunities Risks and Governance — Evidence: Reference to NIST as a great example of supporting voluntary consensus-based industry standards, and emphasis o…
S38
Agentic AI in Focus Opportunities Risks and Governance — “If the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if the…
S39
Decoding the UN CSTD Working Group on Data Governance – draft — Political context: Stated that politics lurks in the background of the work, leading to divergent views on the meaning an…
S40
How to construct a global governance architecture for digital trade — Current governance arrangements that underpin data flows are incoherent and fragmented, reflecting conflicting private i…
S41
E-Commerce Legal and Regulatory Framework for Data Governance in Developing Countries ( Nigeria Customs Service) — Data access and analysis vary, making it important to consider both when deriving insights. Data governance applies to e…
S42
How AI Drives Innovation and Economic Growth — Summary: The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S43
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S44
From Innovation to Impact_ Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S45
E-diplomacy – the new normal — We have a High Level Panel on e-diplomacy at 1300. This will be presented very much from the perspective of diplomats. A…
S46
Introduction — Such individual efforts, however, must be underpinned by sustained, networked collaboration. Without that collaborat…
S47
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 279. There is a nascent, apparently unintended, convergence among the principal platforms used by the participating orga…
S48
Agentic AI in Focus Opportunities Risks and Governance — Evidence: CAISI launched an AI agent standards initiative, issued an RFI on AI agent security, and announced sector-speci…
S49
Agentic AI in Focus Opportunities Risks and Governance — And I think ITI has a really neat role to play in that actually because we all know that NIST is the gold standard and t…
S50
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — The initiative follows NIST’s century-long approach of helping industry develop voluntary standards through consensus ra…
S51
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Michael Sellitto- Owen Lauder- Michael Brown Industry-led, consensus-based approach to standards development is prefe…
S52
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — In current discourse, agentic AI usually refers to systems that can pursue goals with limited supervision. Such systems …
S53
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — “And spanning all of those, I think the most impactful use cases that we have seen, certainly in fraud and scams remedia…
S54
How agentic AI is transforming cybersecurity — Cybersecurity is gaining a new teammate—one that never sleeps and acts independently. Agentic AI doesn’t wait for instruct…
S55
Discussion Report: AI Implementation and Global Accessibility — -Deployment: Maintaining what he identified as four key guardrails: “fairness, accountability, privacy, security”
S56
AI Meets Cybersecurity Trust Governance & Global Security — -Governance Challenges and the Need for Cross-Sector Collaboration: Multiple speakers emphasized that fragmented convers…
S57
AI Meets Cybersecurity Trust Governance & Global Security — Governance Challenges and the Need for Cross-Sector Collaboration: Multiple speakers emphasized that fragmented conversa…
S58
Interim Report: — 43. The Advisory Body is tasked with presenting options on the international governance of AI. We reviewed, among others…
S59
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — 2433 words | 169 words per minute | Duration: 862 seconds. And I want to stay on this theme of training the user, if yo…
S60
Announcement of New Delhi Frontier AI Commitments — Opening remarks and framing of the event
S61
Opening of the session/OEWG 2025 — El Salvador: Thank you, Chairman. In line with the opening words, El Salvador hopes to provide comments to all the dif…
S62
WSIS Action Line C5: Building Trust in Cyberspace — ## Evolution of Cybersecurity Norms Implementation This evolution includes the development of practical tools such as s…
S63
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S64
State of play of major global AI Governance processes — Alan Davidson:Well, thank you, Dr. El-Masri. And a quick thank you and congratulations to the ITU and to Secretary Gener…
S65
US AI Safety Institute director steps down amid uncertainty — Elizabeth Kelly, the inaugural director of the United States AI Safety Institute, has stepped down from her role after a yea…
S66
United States International Cyberspace & Digital Policy Strategy — To advance the NSS and the NCS effectively, promoting, building, and maintaining a secure digital ecosystem must be acco…
S67
Biden discusses national cybersecurity with tech giants and education institutions — The US President Joe Biden met with private sector and education leaders to discuss how to improve cybersecurity in the US…
S68
US government seeks partnership to develop secure and standardised digital ID — The US government is inviting companies with expertise in digital identities on mobile devices to express their interest…
S69
US administration releases National Standards Strategy for Critical and Emerging Technology — The US Government has published a National Standards Strategy for Critical and Emerging Technology aimed at bolstering tec…
S70
FIRST SECTION — 289. According to the Government, insofar as intercepted material could not be read, looked at or listened to by a pers…
S71
Navigating the Digital Future: Standards-led Digital Economy (BSI) — Additionally, voluntary standards reinforce global trade and promote interoperability. However, challenges were identifi…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Austin Mayron
1 argument | 194 words per minute | 951 words | 293 seconds
Argument 1
Standards as gateway for adoption (Austin Mayron)
EXPLANATION
Austin explains that standards are the primary mechanism to unlock innovation and accelerate the adoption of agentic AI. CAISI’s work focuses on developing and promoting standards and best‑practice guidelines in partnership with NIST and industry to lower barriers for deployment.
EVIDENCE
He notes that CAISI just launched an AI agent standards initiative to gather industry input on traditional standards, best practices and guidelines [32-34]. The organization issued a request for information on AI-agent security, pointed to a NIST publication on AI identity and verification, and announced sector-specific listening sessions on health-care, education and finance to learn about adoption barriers [35-38]. He also gives a concrete example of how CAISI could create benchmarks and evaluation methods for handling personally identifiable information in regulated sectors such as health-care, thereby giving companies confidence to adopt agents safely [164-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Austin’s remarks about CAISI launching an AI-agent standards initiative, issuing a request for information on AI-agent security, and holding sector-specific listening sessions are documented in the panel transcript [S3] and summarized in the discussion overview [S1].
MAJOR DISCUSSION POINT
Standards as gateway for adoption
DISAGREED WITH
Jason Oxman, Sam Kaplan, Carly Ramsey, Danielle Gilliam-Moore
S
Sam Kaplan
1 argument | 173 words per minute | 675 words | 233 seconds
Argument 1
Standards bodies essential for security (Sam Kaplan)
EXPLANATION
Sam argues that standards‑setting organizations are crucial for establishing the security foundations needed for safe deployment of agentic AI. They help translate emerging technical risks into concrete, actionable frameworks that both industry and regulators can rely on.
EVIDENCE
He states that standards bodies in the United States and abroad are developing voluntary frameworks that serve as the foundation for understanding the risk picture of agentic AI, moving from a two-dimensional view of model risk to a three-dimensional view that includes agents with “arms and legs” [333-338]. He emphasizes that security of models and agents is a foundational layer for building trust and facilitating deployment [336-338].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sam Kaplan’s claim that standards-setting organizations provide the security foundations for safe agentic AI deployment is corroborated by his statements describing voluntary frameworks and a three-dimensional risk view in the transcript [S3].
MAJOR DISCUSSION POINT
Standards bodies essential for security
DISAGREED WITH
Syam Nair
P
Prith Banerjee
1 argument | 171 words per minute | 1262 words | 442 seconds
Argument 1
Agentic engineers for chip design (Prith Banerjee)
EXPLANATION
Prith describes how Synopsys is creating “agentic engineers” – AI‑driven agents that perform low‑level reasoning tasks in chip and system design, complementing human engineers rather than replacing them. This approach is intended to keep pace with the accelerating innovation cycles and growing complexity of modern silicon.
EVIDENCE
He explains that the pace of innovation for cars and chips is moving from multi-year cycles to yearly cycles, and that the complexity has grown from millions to trillions of transistors, making it impossible for human designers alone to handle the workload [74-84]. Synopsys therefore provides “agentic engineers” that work alongside human engineers, handling lower-level reasoning while humans remain in the loop for oversight [88-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prith Banerjee’s description of “agentic engineers” that complement human chip designers is recorded in the discussion summary, highlighting the need for AI-driven low-level reasoning in silicon design [S1].
MAJOR DISCUSSION POINT
Agentic engineers for chip design
C
Caroline Louveaux
1 argument | 163 words per minute | 678 words | 249 seconds
Argument 1
Four payment guardrails (Caroline Louveaux)
EXPLANATION
Caroline outlines a four‑point guardrail framework that MasterCard uses to ensure agentic payments are safe, trusted and auditable. The guardrails cover agent verification, security‑by‑design, clear consumer intent, and end‑to‑end traceability.
EVIDENCE
She lists the guardrails: (1) “Know your agent” – verify and trust the agent before it can act [218-222]; (2) “Security by design” – use advanced authentication and tokenization to protect credentials [223-225]; (3) “Clear consumer intent” – ensure the consumer explicitly authorizes purchases, illustrated by an incident where an agent ordered sushi using an employee’s card details [226-230]; and (4) “Traceable and auditable” – maintain logs for redress and regulator confidence [231].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Caroline Louveaux outlines a four-pillar guardrail framework for agentic payments-agent verification, security-by-design, clear consumer intent, and traceability-detailed in the panel notes [S1] and reiterated in the transcript [S3].
MAJOR DISCUSSION POINT
Four payment guardrails
S
Syam Nair
1 argument | 183 words per minute | 645 words | 210 seconds
Argument 1
Data governance as core guardrail (Syam Nair)
EXPLANATION
Syam emphasizes that robust data governance is the foundational guardrail for agentic AI, because agents make decisions solely based on the data they receive. Without clear lineage, quality controls and governance, agents can produce harmful outcomes.
EVIDENCE
He points out that if data is manipulated, its lineage is unclear, or governance is missing, agents can generate “scary” results, highlighting the need for strong data governance as a core safeguard [241-245].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Syam Nair emphasizes data governance as the foundational guardrail for agentic AI, warning of risks from manipulated or poorly governed data, as captured in his remarks in the transcript [S3].
MAJOR DISCUSSION POINT
Data governance as core guardrail
DISAGREED WITH
Sam Kaplan
J
Jason Oxman
1 argument | 153 words per minute | 2123 words | 831 seconds
Argument 1
Voluntary consensus over regulation (Jason Oxman)
EXPLANATION
Jason argues that the tech industry prefers voluntary, industry‑driven consensus standards rather than top‑down government regulation, because such standards are globally applicable and better aligned with rapid innovation cycles.
EVIDENCE
He states that voluntary, consensus-based standards are “better than government regulation” and that NIST’s voluntary standards are global in nature, which the industry prefers [172-175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Jason Oxman argues that voluntary, industry-driven consensus standards are preferable to top-down regulation and praises NIST’s global voluntary standards; this position is documented in the panel transcript [S3] and supported by the ITI discussion on NIST and Singapore frameworks [S23].
MAJOR DISCUSSION POINT
Voluntary consensus over regulation
DISAGREED WITH
Austin Mayron, Sam Kaplan, Carly Ramsey, Danielle Gilliam-Moore
J
Jennifer Mulvaney
1 argument | 223 words per minute | 333 words | 89 seconds
Argument 1
Human‑first policy principle (Jennifer Mulvaney)
EXPLANATION
Jennifer stresses that policy should be grounded in a human‑first principle, ensuring that technology serves people and prevents harm rather than focusing solely on technical capabilities.
EVIDENCE
She references the Hindi term “manav” (human) and says policy should always ask what it means for humans and how to prevent harm, quoting Adobe’s mantra that it’s not what technology can do but what it should do for people [262-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Jennifer Mulvaney stresses a human-first policy principle, quoting Adobe’s mantra that technology should serve people, which is reflected in her panel comments [S3].
MAJOR DISCUSSION POINT
Human‑first policy principle
D
Danielle Gilliam-Moore
1 argument | 189 words per minute | 635 words | 201 seconds
Argument 1
Agile ministry‑led governance (Danielle Gilliam-Moore)
EXPLANATION
Danielle proposes an agile, ministry‑led governance model where sector‑specific agencies (e.g., health, finance) develop tailored frameworks, allowing faster, more appropriate regulation for agentic AI use cases.
EVIDENCE
She notes that governments such as the UK and Indonesia are letting ministries with core competencies lead regulation, creating a more diffuse and agile approach that can better serve startups and niche applications [354-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Danielle Gilliam-Moore proposes agile, ministry-led governance with sector-specific frameworks, citing examples from the UK and Indonesia, as described in the discussion overview [S1] and her panel remarks [S3].
MAJOR DISCUSSION POINT
Agile ministry‑led governance
DISAGREED WITH
Combiz Abdolrahimi, Carly Ramsey
E
Ellie Sakhaee
1 argument | 146 words per minute | 505 words | 206 seconds
Argument 1
Regulate based on autonomy continuum (Ellie Sakhaee)
EXPLANATION
Ellie suggests that regulation should be tied to the autonomy level of agents, moving from strict human‑in‑the‑loop requirements for low‑autonomy agents to human‑on‑the‑loop or human‑in‑command for higher‑autonomy systems.
EVIDENCE
She describes a continuum where agents shift from needing step-by-step human approval to more autonomous operation, using the FAA’s drone pilot analogy to illustrate the transition from “pilot always in sight” to “pilot in command” [279-284].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ellie Sakhaee suggests regulation tied to the autonomy level of agents, using the FAA drone pilot analogy to illustrate the continuum, as detailed in her statements in the transcript [S3].
MAJOR DISCUSSION POINT
Regulate based on autonomy continuum
C
Carly Ramsey
1 argument | 188 words per minute | 547 words | 173 seconds
Argument 1
Open standards & global harmonization (Carly Ramsey)
EXPLANATION
Carly calls for open, accessible standards and global harmonization so that agentic AI tools can be used worldwide without conflicting regulatory regimes. She highlights the need for compatibility between NIST standards and regional frameworks such as Singapore’s.
EVIDENCE
She notes that policymakers should consider whether agentic AI is accessible to everyone, whether standards are open, and whether regional frameworks (e.g., Singapore’s) align with NIST’s voluntary standards, stressing the importance of harmonization across jurisdictions [304-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Carly Ramsey calls for open, accessible standards and global harmonization, noting alignment between NIST voluntary standards and Singapore’s framework, as recorded in the panel discussion [S3] and the ITI commentary on NIST’s global role [S23].
MAJOR DISCUSSION POINT
Open standards & global harmonization
DISAGREED WITH
Danielle Gilliam-Moore, Combiz Abdolrahimi
C
Combiz Abdolrahimi
1 argument | 165 words per minute | 334 words | 120 seconds
Argument 1
Inclusive multilateral forums (Combiz Abdolrahimi)
EXPLANATION
Combiz advocates for inclusive multilateral platforms—such as the ITU, UN, and AI for Good—that bring together governments, industry, academia and civil society to develop coordinated standards and benchmarks for agentic AI.
EVIDENCE
He lists a range of multilateral bodies (ITU, UN, AI for Good) that already work on standards, benchmarks and principles, and urges broader engagement to ensure inclusivity in global discussions [432-434].
MAJOR DISCUSSION POINT
Inclusive multilateral forums
DISAGREED WITH
Danielle Gilliam-Moore, Carly Ramsey
Differences
Different Viewpoints
Voluntary consensus standards vs. sector‑specific regulatory frameworks
Speakers: Jason Oxman, Austin Mayron, Sam Kaplan, Carly Ramsey, Danielle Gilliam-Moore
Voluntary consensus over regulation (Jason Oxman) Standards as gateway for adoption (Austin Mayron) Standards bodies essential for security (Sam Kaplan) Open standards & global harmonization (Carly Ramsey) Agile ministry‑led governance (Danielle Gilliam-Moore)
Jason, Austin, Sam and Carly argue that industry-driven, voluntary consensus standards (e.g., NIST, CAISI initiatives) are the preferred way to enable safe agentic AI adoption, emphasizing global harmonisation and security foundations [172-175][32-34][333-338][304-307][308-313]. Danielle counters that a more agile, ministry-led regulatory approach, where sector-specific agencies craft tailored frameworks, is needed to keep pace with innovation and protect consumers [354-358]. This reflects a split between a standards-first, largely voluntary model and a regulatory, sector-focused model.
POLICY CONTEXT (KNOWLEDGE BASE)
The distinction mirrors WTO-based definitions that standards are voluntary while regulations are compulsory, and aligns with calls from regulators to rely on voluntary consensus standards such as NIST rather than sector-specific mandates [S36][S37][S34].
Primary guardrail focus: data governance vs. security standards
Speakers: Syam Nair, Sam Kaplan
Data governance as core guardrail (Syam Nair) Standards bodies essential for security (Sam Kaplan)
Syam stresses that robust data governance-clear lineage, quality controls, and governance-is the foundational guardrail for agentic AI because agents act on data alone [241-245]. Sam emphasizes that security-focused standards and frameworks are the essential foundation for trust and safe deployment, describing a three-dimensional risk view that includes agents with “arms and legs” [333-338]. Both see guardrails as critical but prioritize different aspects (data vs. security).
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions emphasize that guardrails should be rooted in robust data governance-tracking lineage and integrity-rather than solely security checklists, reflecting concerns about fragmented data-realm policies noted in multilateral analyses [S38][S39][S40].
Preferred multilateral platform for coordination
Speakers: Danielle Gilliam-Moore, Combiz Abdolrahimi, Carly Ramsey
Agile ministry‑led governance (Danielle Gilliam-Moore) Inclusive multilateral forums (Combiz Abdolrahimi) Open standards & global harmonization (Carly Ramsey)
Danielle points to the OECD as the primary venue for policy coordination on AI [386-393]. Combiz advocates broader inclusion of bodies such as the ITU, UN, and AI-for-Good to ensure global inclusivity [432-434]. Carly highlights the Singapore International Cyber Week as a practical forum for bringing governments together on cyber-policy and AI governance [404-406]. The speakers differ on which multilateral mechanism should lead the coordination effort.
POLICY CONTEXT (KNOWLEDGE BASE)
The preference for a multilateral coordination platform echoes proposals such as the Swiss-led e-diplomacy portal and broader calls for sustained, transparent collaboration to overcome fragmented digital policy discussions [S45][S46].
Unexpected Differences
Optimistic view of agentic AI complementing humans vs. caution about autonomous risks
Speakers: Prith Banerjee, Ellie Sakhaee
Agentic engineers for chip design (Prith Banerjee) Regulate based on autonomy continuum (Ellie Sakhaee)
Prith presents agentic AI as “agentic engineers” that augment human designers without replacing them, emphasizing efficiency and speed [88-94]. Ellie, however, warns that as agents become more autonomous, regulation must shift to ensure safety, highlighting potential harms if autonomy is unchecked [279-284]. The contrast between a largely positive, complementary framing and a cautionary regulatory stance was not anticipated given the overall collaborative tone of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The split between an optimistic view of agentic AI as a human complement and caution over autonomous risks reflects the broader debate captured in recent AI-growth forums, where speakers highlighted both transformative potential and the need for risk-aware governance [S42][S43][S44].
Overall Assessment

The panel showed broad consensus that guardrails, standards, and governance are essential for safe agentic AI. The main fissures revolve around the preferred mechanism for achieving those guardrails: a voluntary, standards‑driven, globally harmonised approach versus sector‑specific, ministry‑led regulatory frameworks; and the relative emphasis on data governance versus security standards. Additionally, there is a subtle tension between an optimistic view of agentic AI as a productivity enhancer and a more cautious stance that ties regulation to autonomy levels.

The overall level of disagreement is moderate. While participants share common goals (trust, safety, adoption), they diverge on policy pathways and priority guardrails. This suggests that future policy discussions will need to reconcile voluntary standards with targeted regulatory measures and align data‑centric and security‑centric perspectives to create coherent, scalable frameworks for agentic AI.

Partial Agreements
All agree that guardrails are essential for safe agentic AI deployment, but differ on the primary mechanism: Caroline proposes a four‑point payment‑specific framework (verification, security‑by‑design, clear intent, traceability) [218-222][223-225][226-230][231]; Syam stresses data governance as the underlying guardrail [241-245]; Sam focuses on security standards to map the risk surface [333-338]; Austin highlights standards development and sector‑specific listening sessions to address adoption barriers [32-34][156-158].
Speakers: Caroline Louveaux, Syam Nair, Sam Kaplan, Austin Mayron
Four payment guardrails (Caroline Louveaux)
Data governance as core guardrail (Syam Nair)
Standards bodies essential for security (Sam Kaplan)
Standards as gateway for adoption (Austin Mayron)
All concur that regulation and standards must evolve with agentic AI capabilities. Ellie proposes tying regulation to the autonomy level of agents, moving from human‑in‑the‑loop to human‑in‑command as autonomy rises [279-284]. Sam stresses that standards bodies provide the security foundation needed for trust [333-338]. Jason argues that voluntary, consensus‑based standards are preferable to top‑down regulation [172-175]. They share the goal of adaptive governance but differ on the balance between standards and formal regulation.
Speakers: Ellie Sakhaee, Sam Kaplan, Jason Oxman
Regulate based on autonomy continuum (Ellie Sakhaee)
Standards bodies essential for security (Sam Kaplan)
Voluntary consensus over regulation (Jason Oxman)
Takeaways
Key takeaways
Government agencies (CAISI, NIST) are positioning standards as the primary gateway for safe, scalable adoption of agentic AI rather than direct regulation.
Industry sees agentic AI as a productivity multiplier: Synopsys uses “agentic engineers” to augment chip design; MasterCard deploys autonomous agents for real‑time fraud detection; NetApp embeds agents at the storage layer to improve data quality and risk detection.
Four practical guardrails for agentic payments were outlined (know your agent, security‑by‑design, clear consumer intent, traceability/auditability).
Data governance and multi‑level risk controls were highlighted as essential for other sectors.
Policymakers are urged to adopt a human‑first principle, focus on the autonomy continuum, and favor agile, ministry‑led governance frameworks over monolithic regulation.
Voluntary, consensus‑based standards (NIST, OECD, International Consortium of Safety Institutes) and open‑access frameworks are viewed as the most effective way to achieve global harmonisation.
Multilateral coordination platforms such as the OECD, Singapore International Cyber Week, and broader UN/ITU‑style forums were identified as the preferred venues for aligning standards across regions.
Resolutions and action items
CAISI announced an open Request for Information on AI‑agent security (open for another month) and invited industry comments.
CAISI will hold sector‑specific listening sessions in April for healthcare, education, and finance to gather barriers to adoption.
Panelists encouraged companies to submit feedback to NIST publications on AI identity and verification.
MasterCard shared its four‑point guardrail playbook as a model for other firms to adopt.
NetApp committed to further develop storage‑proximate agents and to define data‑governance guardrails internally.
Participants agreed to promote voluntary standards through existing bodies (NIST, OECD, International Consortium of Safety Institutes).
Unresolved issues
How to achieve concrete technical interoperability between emerging regional frameworks (e.g., Singapore’s AI governance framework vs. NIST/OECD standards).
Specific metrics and benchmark suites for multi‑agent system security and risk assessment remain undefined.
The precise regulatory trigger point along the autonomy continuum (when to shift from human‑in‑the‑loop to human‑on‑the‑loop) was discussed but not settled.
Mechanisms for ongoing public‑private collaboration on data lineage and accountability beyond voluntary standards were not detailed.
Whether a unified global standard can accommodate sector‑specific nuances (healthcare, finance, education) without stifling innovation remains open.
Suggested compromises
Adopt voluntary, consensus‑based standards (NIST/OECD) as the default, reserving direct regulation for high‑risk, high‑autonomy use cases.
Combine human‑in‑the‑loop oversight for lower‑autonomy agents with human‑on‑the‑loop or human‑in‑command models for higher‑autonomy agents, mirroring aviation safety practices.
Allow ministries or sector‑specific agencies to craft agile, tailored governance frameworks that feed into broader international standards, balancing speed and consistency.
Encourage open‑source model and standard development to ensure accessibility while still providing security‑by‑design guardrails.
Thought Provoking Comments
CAISI was originally founded as the U.S. AI Safety Institute, but last year it was refounded as the Center for AI Standards and Innovation, signaling a shift away from safety principles toward standards and innovation.
Highlights a strategic policy pivot from a risk‑avoidance mindset to one that emphasizes enabling industry through standards, revealing how government is re‑positioning its role in AI governance.
Set the stage for the discussion on how government can facilitate adoption rather than impose regulation; prompted other speakers to reference standards‑focused initiatives and led to detailed mentions of RFIs, listening sessions, and sector‑specific work.
Speaker: Austin Mayron
We have created agentic engineers – AI agents that complement human engineers, not replace them, acting as lower‑level reasoning workers while humans stay in the loop.
Introduces the novel concept of ‘agentic engineers’ and frames agentic AI as an augmentation tool for complex chip‑and‑system design, linking technical capability with workforce implications.
Shifted the conversation from abstract policy to concrete industry use‑cases; sparked concerns about safety when Prith later described autonomous cars and aircraft being weaponized, leading to a deeper discussion on risk management.
Speaker: Prith Banerjee
AI agentic systems are moving from recommending to acting – they must operate within clear values, permissions, and with full human oversight end‑to‑end.
Distinguishes between assistive and operational AI, emphasizing accountability and the necessity of guardrails for real‑time decision making in payments.
Prompted the panel to explore concrete guardrails; directly led to Caroline’s later articulation of a four‑point playbook and influenced others to discuss oversight mechanisms.
Speaker: Caroline Louveaux
Our playbook for agentic payments includes four guardrails: know your agent, security by design, clear consumer intent, and traceability/auditability.
Provides a tangible, actionable framework that moves the discussion from theory to practice, illustrating how industry can self‑regulate safely.
Served as a reference point for subsequent speakers (e.g., Syam on data governance, Prith on safety) and anchored the later conversation on enterprise‑level risk management.
Speaker: Caroline Louveaux
Data governance is the core guardrail because agents make decisions based on data; without proper lineage and control, manipulated data can produce scary outcomes.
Elevates data quality from a technical detail to a central security concern, linking it to the broader risk of agentic AI’s ‘blast radius.’
Expanded the scope of the guardrail discussion to include data pipelines, influencing the panel to consider multi‑layered safeguards and public‑private partnerships.
Speaker: Syam Nair
We should think of the agentic AI continuum – from human‑in‑the‑loop to human‑on‑the‑loop to human‑in‑command – similar to how the FAA evolves pilot oversight of drones.
Introduces a nuanced framework for scaling autonomy, providing a clear lens for policymakers to align regulatory intensity with agent capability.
Redirected the policy discussion toward graduated oversight models, prompting others (e.g., Sam Kaplan, Danielle Gilliam‑Moore) to discuss tiered standards and agile governance.
Speaker: Ellie Sakhaee
Policy should focus on regulating the use or application that causes harm rather than the underlying model itself, otherwise regulation will lag behind evolving technology.
Challenges a common regulatory approach and suggests a more effective, outcome‑based strategy.
Influenced the conversation about practical standards versus abstract principles, reinforcing Combiz’s call for operational clarity and Sam’s emphasis on standards bodies.
Speaker: Ellie Sakhaee
Human‑first principle: policy must protect humans before models; it’s not about what technology can do, but what it should do.
Re‑centers the ethical debate on human impact, reminding the panel that technological possibilities must be weighed against human welfare.
Provided a moral anchor that resonated throughout the discussion, especially when paired with Caroline’s trust‑by‑design guardrails and Prith’s safety warnings.
Speaker: Jennifer Mulvaney
Open standards and global harmonization are essential; we need to ensure frameworks like Singapore’s align with NIST to avoid contradictory regulations.
Highlights the geopolitical dimension of AI standards, emphasizing the need for cross‑border compatibility to prevent fragmentation.
Shifted the dialogue toward multilateral coordination, leading to the later round where participants named OECD, ITU, and other forums as preferred platforms.
Speaker: Carly Ramsey
Governance is more than regulation; we need agile, sector‑specific frameworks (e.g., ministries with deep expertise) to fill gaps while broader standards like ISO are being developed.
Offers a pragmatic approach to bridging the lag between fast‑moving technology and slow‑moving formal standards, advocating for decentralized, expertise‑driven governance.
Prompted a consensus on the need for both high‑level principles (OECD) and practical, industry‑led standards, influencing the final recommendations on multilateral venues.
Speaker: Danielle Gilliam‑Moore
Governments should provide concrete, operational standards and playbooks rather than abstract principles; clarity and practical guidance are what industry needs.
Summarizes the recurring demand for actionable guidance, reinforcing the earlier calls for standards, benchmarks, and clear governance.
Served as a concluding reinforcement that tied together the earlier points about standards, guardrails, and the need for tangible policy tools.
Speaker: Combiz Abdolrahimi
Overall Assessment

The discussion was driven forward by a series of pivotal insights that moved it from high‑level introductions to concrete, actionable frameworks. Austin’s framing of CAISI’s standards‑first approach set the governmental tone, while Prith’s ‘agentic engineers’ and safety scenario introduced the technical stakes. Caroline’s shift from assistive to operational AI and her four‑guardrail playbook gave the conversation a practical backbone, which Syam expanded by foregrounding data governance. Ellie’s continuum of autonomy provided a nuanced regulatory lens, prompting a broader debate on outcome‑based versus model‑based regulation. Contributions from Jennifer, Carly, Danielle, and Combiz kept the focus on human impact, global harmonization, and the need for agile, sector‑specific governance. Together, these comments created a layered narrative: first establishing the policy context, then illustrating industry use‑cases and risks, followed by concrete guardrails, and finally converging on the multilateral mechanisms needed to implement them. This progression shaped a cohesive dialogue that balanced optimism about agentic AI’s potential with a clear-eyed call for pragmatic, human‑centered standards and international coordination.

Follow-up Questions
Submit comments to the Request for Information (RFI) on AI agent security to inform standards development
The RFI seeks industry input on security challenges, essential for creating effective standards and best practices.
Speaker: Austin Mayron
Identify sector-specific barriers to adoption of AI agents in health care, education, and finance through listening sessions
Understanding unique challenges in these sectors will guide targeted standards and policy support for adoption.
Speaker: Austin Mayron
Develop benchmarks, methodologies, and evaluation methods to ensure AI agents handle PII in compliance with regulatory obligations
Providing measurable standards will give industry confidence that agents meet privacy and compliance requirements.
Speaker: Austin Mayron
Create standards and best practices for AI agent identity verification and authentication
Clear guidelines are needed to secure agent interactions and prevent rogue or fraudulent agents.
Speaker: Austin Mayron
Research verification and validation techniques to achieve near‑100% coverage for software‑defined physical systems (e.g., autonomous cars, aircraft)
High‑assurance validation is critical to prevent catastrophic failures in safety‑critical agentic AI applications.
Speaker: Prith Banerjee
Investigate safeguards against malicious use of agentic AI in critical infrastructure and weaponizable systems
Understanding and mitigating risks of AI‑driven attacks on physical assets is essential for public safety.
Speaker: Prith Banerjee
Study methods to ensure clear consumer intent verification in agentic payment transactions to avoid unintended purchases
Preventing misinterpretation of agent commands protects consumers and maintains trust in AI‑driven commerce.
Speaker: Caroline Louveaux
Explore data governance, lineage tracking, and guardrails for data that powers AI agents
Robust data governance is necessary to prevent manipulation and ensure reliable agent decisions.
Speaker: Syam Nair
Design multi‑level enterprise guardrails and public‑private partnership frameworks for managing agentic AI risk
Coordinated guardrails across stakeholders help contain the broader blast radius of agent errors or threats.
Speaker: Syam Nair
Define accountability structures so that humans, not agents, bear responsibility for AI‑driven outcomes
Clarifying legal and operational accountability is vital for governance and liability management.
Speaker: Syam Nair
Determine appropriate human‑in‑the‑loop, human‑on‑the‑loop, and human‑in‑command models based on agent autonomy levels
Tailoring human oversight to agent capabilities balances safety with efficiency.
Speaker: Ellie Sakhaee
Focus regulation on the harms caused by agentic AI applications rather than on the underlying models themselves
Targeted regulation can keep pace with rapidly evolving models while mitigating real‑world risks.
Speaker: Ellie Sakhaee
Harmonize open standards for agentic AI across regions (e.g., NIST vs. Singapore frameworks) to ensure global interoperability
Consistent standards prevent fragmentation and enable inclusive, worldwide adoption of agentic AI.
Speaker: Carly Ramsey
Utilize multilateral platforms such as the OECD, International Consortium of Safety Institutes, and Singapore International Cyber Week for coordinated policy dialogue
These venues facilitate global consensus and sharing of best practices for agentic AI governance.
Speakers: Danielle Gilliam‑Moore, Sam Kaplan, Carly Ramsey, Combiz Abdolrahimi
Develop technical benchmarks for multi‑agent systems to evaluate emerging risks before deployment
Benchmarks will help assess complex interactions and prevent unforeseen systemic failures.
Speaker: Ellie Sakhaee
Create agile, sector‑specific regulatory frameworks (diffuse model) to accelerate adoption while maintaining safety
Flexible approaches allow faster response to industry needs without stifling innovation.
Speaker: Danielle Gilliam‑Moore
Produce practical standards, playbooks, and operational guidance for good governance of agentic AI
Concrete tools enable organizations to implement governance effectively rather than relying on abstract principles.
Speaker: Combiz Abdolrahimi
Prioritize security as a foundational layer for trust in agentic AI deployments
Robust security underpins confidence in AI agents and is essential for safe, widespread adoption.
Speaker: Sam Kaplan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale – Keynote Anne Bouverot

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session featured Anne Bouverot, France’s Special Envoy for Artificial Intelligence, who was introduced as a diplomat, technologist and former Director General of the GSMA, and highlighted as a key figure in France’s AI governance and international cooperation efforts [1-4]. She was welcomed to speak at the AI Impact Summit, a platform that brings together experts to discuss ethical and responsible AI regulation [5-9].


Bouverot opened by noting that she had helped organize the Paris AI Summit a year earlier and that India’s decision to host the current summit signals both symbolic and strategic importance for the Global South [13-18]. She emphasized that AI is a global transformation that must be shaped by all nations, citing India’s large market, vibrant ecosystem, and its ranking as the world’s third most competitive AI market according to the Stanford AI Index [20-23]. Bouverot pointed to the longstanding Franco-Indian partnership and the “year of Franco-India” as a foundation for shared understanding of AI stakes [24-27].


She contextualized AI within intense geopolitical competition, referencing US “Stargate” investments and China’s DeepSeek initiative, while noting the emergence of coalitions of willing countries (including France, India, Brazil, Japan, Germany and Canada) that seek inclusive and sovereign AI development [28-33]. Describing a shift from the previous AI Action Summit in Paris to the current AI Impact Summit in Delhi, she highlighted a focus on tangible impacts in education, public health and everyday life [36-38].


As an illustration, she cited an AI tool at the All India Institute of Medical Sciences that can detect tuberculosis from a cough recorded on a smartphone, demonstrating a practical public-health application [41-43]. She also announced a pioneering data-sharing memorandum between India’s iSPIRT and France’s Health Data Hub that will enable cross-border health data transfer while preserving privacy, facilitating joint research and new cures [44-48]. In academia, Bouverot described the “RUSH” program of scientific exchanges between the two countries and her role chairing the board of France’s École Normale Supérieure, underscoring deep research collaboration [48-55]. She highlighted initiatives for the common good, such as launching an open-hardware tool to promote linguistic diversity and AI-powered translation for India’s 22 official languages, in partnership with Bhashini and Current AI [59-62].


Addressing sustainability, she noted the coalition for sustainable AI launched in Paris and the Resiliency Working Group co-chaired by France and India, which will run a resilient AI challenge to reduce AI’s energy footprint [64-69]. Bouverot stressed the need for child safety, calling for stronger age-verification mechanisms and anti-cyberbullying measures, aligning with French President Macron’s priorities [70-77]. Concluding, she affirmed that France stands ready to collaborate with India and other partners to build an inclusive, sustainable, sovereign AI ecosystem rooted in the common good, emphasizing that the future of AI must be written together with the world’s citizens [85-86].


Keypoints

Hosting the AI Impact Summit in India underscores a strategic push for global inclusivity.


The summit’s location in the Global South sends a powerful message that AI is not limited to a few nations or companies, and India’s market size and ecosystem make it an ideal host. [15-19]


AI is now a focal point of intense geopolitical competition, demanding multilateral cooperation and sovereign aspirations.


Recent moves by the US and China highlight a fierce geopolitical and economic race, prompting the formation of “coalitions of the willing” such as France, India, Brazil, Japan, Germany, and Canada to pursue inclusive and sustainable AI. [28-34]


France-India collaboration is being operationalised across key sectors:


Public health: AI tools that diagnose tuberculosis from a cough sound. [40-43]


Data sharing: A pioneering privacy-preserving health-data transfer framework between iSPIRT (India) and the Health Data Hub (France). [44-48]


Research & academia: Joint scientific exchanges under the “RUSH” program and leadership ties with École Normale Supérieure. [49-55]


Initiatives for the common good focus on open resources and linguistic diversity.


An open-hardware tool for AI-powered translation, developed with Bhashini and Current AI, aims to support India’s 22 official languages and promote culturally representative AI systems. [55-63]


A strong commitment to sustainable and safe AI, especially for children.


France and India co-chair a Resiliency Working Group to address AI’s energy footprint, launch a resilient-AI challenge, and push for robust age-verification and anti-cyberbullying measures. [64-77]


Overall purpose/goal:


The discussion serves to showcase France’s diplomatic leadership in shaping AI governance by forging a deep, multi-dimensional partnership with India. It aims to launch concrete collaborative projects, promote inclusive and sustainable AI development, and position the France-India alliance as a model for global cooperation on AI impact.


Overall tone:


The tone is consistently diplomatic, optimistic, and forward-looking, emphasizing partnership and shared ambition. It begins with celebratory and inclusive language, moves into a more urgent, strategic framing when addressing geopolitical competition and sustainability, and concludes with a resolute call to action that balances innovation with responsibility. The tone remains constructive throughout, with a slight shift toward heightened seriousness when discussing safety and climate concerns.


Speakers

Anne Bouverot


Role/Title: Special Envoy for Artificial Intelligence, France; former Director General of the GSMA; diplomat and technologist


Area of Expertise: AI governance, international cooperation, AI policy, public-health AI applications, data sharing and governance, research & academia, sustainable AI, AI safety for children


Source: [S1]


Speaker 1


Role/Title: Event host/moderator (introduces the keynote speaker)


Area of Expertise:


Source: [S3]


Additional speakers:


(None identified beyond the listed speakers)


Full session report: Comprehensive analysis and detailed insights

Speaker 1 introduced Ms Anne Bouverot, France’s Special Envoy for Artificial Intelligence, highlighting her background as a diplomat, technologist and former Director General of the GSMA, and noting the audience’s heightened concern about AI regulation and responsible AI before inviting her to the podium [1-9].


Ms Bouverot began by greeting the assembly in Hindi and French and thanking the hosts [10-12]. She recalled her role in organising the Paris AI Summit a year earlier and explained that India’s decision to host the current AI Impact Summit sends a powerful symbolic and strategic signal: AI is not the preserve of a few nations or corporations but a global transformation that must be shaped by all [13-18]. She reinforced this point by citing India’s vast market, vibrant ecosystem, strong technological expertise, and entrepreneurial dynamism, noting that the Stanford AI Index ranks India third globally in AI market competitiveness [20-23].


The French-Indian partnership was presented as a cornerstone of the agenda. Ms Bouverot described this year as the “year of Franco-India” and recalled the Paris discussions that first made AI geopolitics visible [24-27]. She contrasted the United States’ “Stargate” investment and China’s DeepSeek initiative (both emblematic of a fierce geopolitical and economic AI race) with the emergence of a “coalition of the willing” that includes countries such as France, India, Brazil, Japan, Germany, Canada, and many others [28-35]. She then marked the shift from the previous AI Action Summit in Paris to the present AI Impact Summit in Delhi, stressing a move from policy discussion to tangible impact in education, public health and everyday life [36-38].


Concrete collaboration was illustrated through several sector-specific initiatives. In public health, Ms Bouverot highlighted an AI tool at the All India Institute of Medical Sciences that can analyse a cough recorded on a smartphone to distinguish early-stage tuberculosis from a common cold, demonstrating a practical, life-saving application of AI [40-43]. In data governance, she announced a pioneering memorandum of understanding between India’s iSPIRT and France’s Health Data Hub that will enable the world’s first privacy-preserving cross-border health-data transfer, thereby facilitating joint research and the search for new cures [44-48].


Academic cooperation was showcased via the “RUSH” programme of scientific exchanges, so named, she explained, because there is a “rush to cooperate” between the two countries; she also noted her role as chair of the board of the École Normale Supérieure and announced that the next RUSH edition will be hosted in France [49-55].


She announced the launch of an open-hardware tool, developed with Bhashini and Current AI, to promote linguistic diversity and AI-powered translation across India’s 22 official languages, positioning India as an ideal test-bed for culturally representative AI systems [56-63].


Sustainability featured prominently. Ms Bouverot recalled the coalition for sustainable AI created in Paris and explained that France and India now co-chair the Resiliency Working Group, which will oversee a “Resilient AI Challenge” aimed at reducing AI’s energy consumption and aligning development with climate goals [64-69]. She reminded the audience that AI’s massive energy demand can jeopardise climate-goal attainment, underscoring why sustainability must be built-in from the design stage [64-66].


Child safety was also foregrounded. Citing President Macron’s priority, she called for robust age-verification mechanisms and anti-cyberbullying measures, arguing that innovation must be paired with protection for the most vulnerable users [70-77]. She warned that AI must not become a tool that endangers children [70-71] and emphasized that innovation and protection can and must go hand in hand [72-73].


In her concluding remarks, Ms Bouverot framed AI as a societal, cultural and political transformation that is already redefining work and public health, and posed a stark rhetorical question: will humanity shape AI or leave future generations to inherit an ungoverned technology? [78-85]. She affirmed France’s readiness to work with India and all willing partners to build an AI ecosystem that is inclusive, sustainable, sovereign and rooted in the common good, and concluded that the future of AI must not be written for the world, but with its citizens [85-87].


Both speakers converged on the need for inclusive, responsible AI governance through international cooperation: Speaker 1 highlighted growing concern over ethical regulation [7], while Ms Bouverot described multilateral coalitions and joint projects [31-33][64-69]. The primary divergence lay in emphasis: Speaker 1 called for concrete regulatory frameworks, whereas Ms Bouverot foregrounded partnership-driven impact initiatives and collaborative coalitions as the pathway to responsible AI [7][31-33]. This complementary dynamic reflects a broader policy context in which nations are seeking to balance rapid AI innovation with ethical safeguards, data sovereignty and climate considerations [S1][S9][S20].


Overall, the keynote underscored the strategic importance of hosting the AI Impact Summit in the Global South, highlighted the deepening Franco-Indian alliance across public health, data sharing, research, linguistic inclusion, sustainability and child safety, and positioned the emerging “coalition of the willing” as a model for inclusive, sovereign AI development. The tone remained diplomatic and forward-looking, moving from geopolitical competition to collaborative action, and concluded with a clear call to shape AI proactively for the benefit of all.


Session transcript: Complete transcript of the session
Speaker 1

Well, it’s my great pleasure to invite our next keynote speaker, who is Ms. Anne Bouverot, Special Envoy for Artificial Intelligence, France. Diplomat, a technologist, and former Director General of the GSMA, which is Global System for Mobile Communication Association. Ms. Bouverot sits at the heart of France’s efforts to lead on AI governance and international cooperation. She has been instrumental in advancing the global conversation on responsible AI regulation by bridging innovation policy and multilateral diplomacy at the highest levels. So we are about to set the stage before I invite Ms. Bouverot here, but indeed, this is one platform, the AI Impact Summit. Thank you. Where we do get the opportunity to listen to all these esteemed speakers as they put forth their points.

their remarks, and their valuable insights, which is based on years of experience, ladies and gentlemen. At the time, we are all concerned about AI regulations, and we are all concerned about ethical and responsible AI. It would be a pleasure to listen to our next keynote speaker. Ladies and gentlemen, with a round of applause, please welcome Ms. Anne Bouverot, Special Envoy for Artificial Intelligence, France.

Anne Bouverot

Namaste. Bonjour. Excellencies, distinguished guests, dear guests. Dear friends. Thank you so much for welcoming me here today at the AI Impact Summit. I had the privilege to lead the organization of the Paris Summit about exactly one year ago. It is in Paris that India announced to the world its desire, its ambition, its resolve to organize the AI Impact Summit that is taking place now. Holding an AI Summit in a country from the global south is very important from a symbolic perspective, but it is even more important from a strategic perspective. It sends a very powerful message to the world. AI is not a privilege of a few nations, not the preserve of a few companies.

It is a global transformation and it must be shaped by all. India is, in my view, the perfect country to host this summit. I don’t need to remind you about the scale of this market, the richness of the ecosystem, the strength of the technological expertise here, your incredible entrepreneurial dynamism. India has, over the years, positioned itself to be at the forefront of both AI development and adoption. Just to quote a source, the Stanford AI Index ranks India third globally in AI market competitiveness. This is not by chance. Yes. France and India have a longstanding partnership and I believe share a common understanding of what is at stake. This year is the year of Franco-India.

Franco-Indian, or Indo-French, innovation. And last year in Paris, the geopolitics of AI started to be very visible. Remember one year ago, the announcement of Stargate, the US saying that they were investing in AI to really dominate the world. And remember DeepSeek, China saying that they’re also in the race with a different way. AI is at the center of a fierce geopolitical and economic competition. But this also created a momentum for stronger collaboration between countries such as France, India, Brazil, Japan, Germany, Canada, and many others. Coalitions of the willing of the countries that have key talent in AI, who share a vision that it must be inclusive and sustainable, and a legitimate aspiration for more sovereignty.

I believe this is a very key geopolitical moment. In Paris, we spoke about action. This year in Delhi, we speak about impact. We’re going from the AI Action Summit to the AI Impact Summit. Impact in education, in public health, impact that improves lives, not just in theory, but in practice. And there are a number of areas in which our strong partnership between France and India is very relevant and strategic. I’d like to start with public health. During my previous visit to India back in November, I was deeply impressed by some AI applications, and in particular by an AI application that I saw at AIIMS, the All India Institute of Medical Sciences. An application which, if you just cough into a smartphone, AI analyzes the sound and can be an early detector of tuberculosis versus a more classical cold or other viral illness.

This is a very important, very practical, very tangible application of AI for public health. Second, data sharing and data governance. The ongoing work between iSPIRT here in India, the Health Data Hub in France, and other partners, together with the recently signed MOU, will enable, I think as a first in the world, health data transfer across borders in a privacy-preserving way. This will enable joint research and the search for new cures for diseases. Third, research and academia. I chair the board of one of France’s leading academic institutions, the École Normale Supérieure (Normale Sup), so this is a subject that is very dear to my heart. This week, there was a full program of scientific exchanges.

We called it RUSH, because there is a rush to cooperate between our two countries. It was a series of exceptional talks by researchers and heads of institutions, and the next edition will be held in France. Fourth, I want to talk about AI for the common good, and I was very pleased to hear John Palfrey from the MacArthur Foundation talk about Current AI. Current AI is a foundation that we launched in Paris with the help of his foundation and of the United Nations, at the initiative of France, India and other countries, and with other partners. It is a foundation to help sustain AI development for the common good by enabling open data sets, open-source tools, and whatever else will not be funded by VCs and private funders.

This year, at this summit, we are launching an open hardware tool to promote linguistic diversity and AI-powered translation. This is a partnership between Bhashini and Current AI. With its 22 official languages and many more spoken here, India perfectly embodies the challenges and the opportunities of cultural representation in AI systems. Many countries around the world face these challenges, but India is the perfect place to launch this initiative. And fifth, and not least, sustainable AI. In Paris, we launched a coalition for sustainable AI. AI requires huge amounts of energy and risks putting our climate goals, and our desire to preserve the planet, at risk. So we launched this coalition, and this year France co-chairs the Resiliency Working Group with India.

And sustainability is really something that needs to be built in from the beginning, by design, in AI systems. It cannot be an afterthought. We are launching today, together with India and other partners, a Resilient AI Challenge that will help find solutions in this very important area. And finally, we must speak about safety, especially for children. This is a priority for President Macron, as you heard him say yesterday. It is a priority for him because it is a priority for citizens in France, and I believe it is a priority for parents and citizens around the world. AI can enable a number of great things, in public health and in other areas, but it must not become a tool that endangers children.

We must demand and strengthen age-verification mechanisms. We must fight against cyberbullying. Innovation and protection can and must go hand in hand. Excellencies, dear friends, AI is not only a technological transformation. It is a societal, cultural and political transformation. The question is not whether AI will change our societies: it is already redefining work, and it will transform public health. The real question is: will we shape AI, or will we tell our children that we didn’t even try? France stands ready to work with India and with all willing partners to build an AI ecosystem that is inclusive, sustainable, sovereign, and rooted in the common good. The future of AI must not be written for the world. It must be written with the world.

Related Resources: Knowledge base sources related to the discussion topics (20)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Ms Anne Bouverot is France’s Special Envoy for Artificial Intelligence, a diplomat, technologist and former Director General of the GSMA.”

The knowledge base identifies Anne Bouverot as Special Envoy for AI, a diplomat and technologist, and former Director General of the GSMA [S1] and [S9].

Confirmed (high)

“Ms Bouverot is Chair of the board of the École Normale Supérieure.”

Her role as Chair of the board of ENS is explicitly listed in the source [S9].

Confirmed (medium)

“The Paris discussions a year earlier first made AI geopolitics visible.”

The transcript notes that “last year in Paris the geopolitics of AI started to be very visible” [S80].

Confirmed (high)

“The United States’ “Stargate” investment and China’s DeepSeek initiative are emblematic of a fierce geopolitical and economic AI race.”

Both the US “Stargate” programme and China’s DeepSeek project are mentioned as key AI-race examples in the knowledge base [S80] and [S82].

Additional Context (medium)

“A “coalition of the willing” includes countries such as France, India, Brazil, Japan, Germany, Canada and many others.”

The source lists middle-power members of the coalition (France, India, Germany, Japan, Canada and Australia), providing additional detail, though Brazil is not mentioned [S82].

Additional Context (medium)

“The AI Impact Summit is a collaborative event between France and India aimed at building trusted AI partnerships.”

The knowledge base describes the summit as a joint France-India effort to advance trusted AI and scientific discovery [S10].

External Sources (84)
S1
Building Trusted AI at Scale – Keynote Anne Bouverot — Namaste. Bonjour. Excellencies, distinguished guests, dear guests. Dear friends. Thank you so much for welcoming me here…
S2
THE FORGOTTEN FRENCH Exiles in the British Isles, 1940-44 — Mauriac, C., The Other de Gaulle (London, Angus & Robertson, 1973); Michel, H., Histoire de la France Libre (P…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of reg…
S7
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Breaking down new elements and challenges, identifying legal uncertainties to be clarified During the discussion, the s…
S8
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Generative AI systems also pose risks to democracy, as they can spread misinformation and disinformation. Public regulat…
S9
Building Trusted AI at Scale – Keynote Anne Bouverot — It is a global transformation and it must be shaped by all. India is, in my view, the perfect country to host this summi…
S10
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — thank you good morning everyone thank you I’m Julie Rouget I’m director of the French Tech mission, so we support the gr…
S11
AI for Social Good Using Technology to Create Real-World Impact — First one is diagnosis and diagnosing TB in economically vulnerable communities isn’t easy. X -ray machines, sputum anal…
S12
Building Scalable AI Through Global South Partnerships — The institute’s work on tuberculosis—the world’s largest infectious disease killer—demonstrates AI’s potential to addres…
S13
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S14
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Om Birla highlights India’s incredible diversity with 27 official languages, 19,500 dialects, and over 400 documented cu…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state -o…
S16
Building Climate-Resilient Systems with AI — The session demonstrated both the remarkable potential of AI for climate solutions and the complex challenges involved i…
S17
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Bouverot promotes the Resilient AI Challenge as a groundbreaking international effort specifically targeting the develop…
S19
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The tone was consistently collaborative, optimistic, and forward-looking throughout the session. Delegates maintained a …
S20
Laying the foundations for AI governance — Need for international cooperation despite geopolitical challenges
S21
AI diplomacy — For centuries, power was defined by territory, armies, and economic might. Today, a new element is paramount: data and t…
S22
Military AI: Operational dangers and the regulatory void — In October 2022, the US Department of Commerce revealed a new export control on semiconductors and computing chips – mat…
S23
Keynote-HE Emmanuel Macron — The money race is important and we cannot discount it, but the outcomes and real value creation for our population is ev…
S24
India and France to strengthen digital partnerships — Indian Prime Minister Narendra Modi’s two-day visit to France, where he held discussions with French President Emmanuel …
S25
WS #159 Domain names: digital inclusion and innovation — 1. Linguistic Diversity and Inclusion Ram Mohan: Thank you so much. Can you hear me? Okay, great. Thank you so much, …
S26
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 314. According to the data collected by the review team (see figure 19 below, where the number of organi…
S27
Stocktaking exercise of the Global Digital Compact process and how to link it to the WSIS +20 process — Henri Eli Monceau:Thank you very much, Renata. I would also like to add an element, perhaps, in relation to this taking …
S28
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — ### Preserving Linguistic Diversity Abhishek Singh: Thank you for convening this and bringing this very, very important…
S29
WS #172 Regulating AI and Emerging Risks for Children’s Rights — This question highlights the need to integrate safety considerations from the earliest stages of technology development,…
S30
WS #376 Elevating Childrens Voices in AI Design — Dr. Mhairi Aitken: Yeah, I mean, I would agree that children have particular rights, they have particular needs, unique …
S31
Safeguarding Children with Responsible AI — “As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to…
S32
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Qian Xiao:OK, well, I’m doing a lot of research on the international governance of AI. And from our perspective, we thin…
S33
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Bolivia emphasized that the development of AI and digital technologies must prioritize fairness and ethical consideratio…
S34
The Global Economic Outlook — Georgieva emphasizes the importance of making artificial intelligence accessible to all, not just a privileged few. She …
S35
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S37
How to make AI governance fit for purpose? — Anne Bouverot described Europe’s evolution from regulation-focused approaches toward innovation and practical outcomes. …
S38
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Rather than creating prescriptive legislative requirements, the framework seeks to “build a culture of restraint” among …
S39
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S40
Lightning Talk #209 Safeguarding Diverse Independent NeWS Media in Policy — ## Background and Research Context None identified beyond those in the speakers names list.
S41
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S42
Defending Truth — Deepfakes, which are manipulated and misleading videos, are considered a concerning issue. They have the potential to be…
S43
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S44
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S45
OPENING SESSION | IGF 2023 — In conclusion, Mr. Kishida Fumio’s contributions to the AI discourse underscore its potential for socio-economic develop…
S46
Why science metters in global AI governance — The discussion maintained a consistently serious, collaborative, and optimistic tone throughout. Speakers emphasized urg…
S47
Open Forum #33 Building an International AI Cooperation Ecosystem — The discussion maintained a consistently collaborative and optimistic tone throughout. Speakers were respectful and cons…
S48
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — ### Diplomatic Engagement – Engaging in multi-stakeholder collaboration Devine Salese Agbeti: Thank you very much. Fir…
S50
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The tone was consistently collaborative, optimistic, and forward-looking throughout the session. Delegates maintained a …
S51
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Minister Weishnaff, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere app…
S52
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Aur isme bahut mehnat karni padhegi. And we are prepared to put that hard work, put that effort. Our Prime Minister keep…
S53
Building Trusted AI at Scale – Keynote Anne Bouverot — Impact:This statement sets the foundational tone for the entire speech, establishing the philosophical framework that AI…
S54
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S55
Military AI: Operational dangers and the regulatory void — In October 2022, the US Department of Commerce revealed a new export control on semiconductors and computing chips – mat…
S56
Impact the Future – Compassion AI | IGF 2023 Town Hall #63 — The competition for market control in AI is intensifying, with Western companies such as Microsoft, AWS, Google, and Met…
S57
Hard Power: Wake-up Call for Companies / DAVOS 2025 — Mousavizadeh highlights the intensifying competition between the US and China in advanced technologies, particularly AI….
S58
Comprehensive Discussion Report: The Future of Artificial General Intelligence — International cooperation on minimum safety standards is needed, but geopolitical competition makes coordination difficu…
S59
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — This transcript captures discussions from the AI Impact Summit, a collaborative event between France and India focused o…
S60
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — AI has been used by law enforcement but public has a fear to it and has a misunderstanding. perhaps or right understandi…
S61
India and France to strengthen digital partnerships — Indian Prime Minister Narendra Modi’s two-day visit to France, where he held discussions with French President Emmanuel …
S62
Inclusive AI_ Why Linguistic Diversity Matters — Collaboration as Public‑Good Open‑Source Initiative
S63
Multilingual Internet: a Key Catalyst for Access & Inclusion | IGF 2023 Town Hall #75 — Mark Durdin: Yeah, thank you. I think it’s very interesting hearing a lot of this discussion, and I’m really resonating w…
S64
WS #159 Domain names: digital inclusion and innovation — 1. Linguistic Diversity and Inclusion Ram Mohan: Thank you so much. Can you hear me? Okay, great. Thank you so much, …
S65
Responsible AI for Shared Prosperity — Lingua Africa is a new multi-partner initiative that focuses on creating open, community-governed language infrastructur…
S66
Responsible AI for Children Safe Playful and Empowering Learning — A striking aspect of the discussion was LEGO’s unwavering commitment to child safety and privacy. Gonsalves detailed the…
S67
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Jun Zhao: Right. OK, well, thank you very much for inviting me to be here. I wish I could be there in person very much…
S68
AI for Good – food and agriculture — High level of consensus with collaborative spirit. The speakers demonstrate unified vision for leveraging AI and robotic…
S69
Conversation: 01 — Artificial intelligence
S70
UNSC meeting: Scientific developments, peace and security — The French representative emphasised three key points regarding scientific progress and its role in fostering peace and …
S71
Ad Hoc Consultation: Friday 9th February, Morning session — The delegation commenced with an expression of thanks to the Chair, affirming their commitment to retaining the abbrevia…
S72
Taking Stock — This transcript captures the “Taking Stock” session from the 20th Internet Governance Forum (IGF) held in Norway in 2025…
S73
Parallel Session D1: Third Global Forum for National Trade Facilitation Committees — During a friendly morning address in Barbados, a World Bank Group representative began by thanking the hosts and organis…
S74
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — Deputy Prime Minister Busch opened her remarks with a greeting in Hindi (“Namaste, aap kaise hain”) and expressed gratitude…
S75
Keynote Adresses at India AI Impact Summit 2026 — And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t…
S76
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S77
The Global Power Shift India’s Rise in AI & Semiconductors — The panellists addressed fundamental changes in how knowledge is acquired and applied in the AI era. Singh emphasised th…
S78
The Global Power Shift India’s Rise in AI & Semiconductors — And with the whole ecosystem around startups, we all know India is the third largest startup ecosystem of the world. Wit…
S79
Keynote-HE Emmanuel Macron — The speech concluded with a powerful reaffirmation of the central thesis: that the future of AI will be built by those w…
S80
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-keynote-anne-bouverot — Franco -Indian or Indio -French innovation. And last year in Paris. the geopolitics of AI started to be very visible. Re…
S81
Global Perspectives on Openness and Trust in AI — I don’t know, is really the answer. Governance is such a broad word. There’s a lot of, for example, open source is reall…
S82
Global Perspectives on Openness and Trust in AI — Bouverot highlighted how China’s strategic use of open source technologies, exemplified by DeepSeek’s emergence, demonst…
S83
AI race shows diverging paths for China and the US — The US administration’s new AI action plan frames global development as anAI racewith a single winner. Officials argue A…
S84
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Dr. Sangita Reddy’s presentation demonstrates a sophisticated progression of thought that moves from reframing disadvant…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument, 118 words per minute, 193 words, 97 seconds
Argument 1
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks
EXPLANATION
Speaker 1 highlights that the audience shares worries about how AI should be regulated and stresses the need for frameworks that ensure ethical and responsible use of AI. This sets the tone for the summit by underscoring the urgency of governance measures.
EVIDENCE
The speaker explicitly states that “we are all concerned about AI regulations, and we are all concerned about ethical and responsible AI” [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple IGF sessions stress the urgency of AI regulation grounded in human-rights frameworks and the need to address legal uncertainties, supporting the call for ethical, responsible AI [S6][S7][S8].
MAJOR DISCUSSION POINT
Need for AI regulation and ethical AI
AGREED WITH
Anne Bouverot
DISAGREED WITH
Anne Bouverot
Anne Bouverot
9 arguments, 116 words per minute, 1148 words, 590 seconds
Argument 1
Hosting the summit in India signals that AI is a global transformation, not a privilege of a few nations
EXPLANATION
Anne Bouverot argues that locating the AI Impact Summit in the Global South demonstrates that AI development belongs to all countries, not just a handful of powerful nations. The venue sends a symbolic and strategic message of inclusivity in the AI revolution.
EVIDENCE
She notes that “Holding an AI Summit in a country from the global south is very important… It sends a very powerful message to the world. AI is not a privilege of a few nations, not the preserve of a few companies. It is a global transformation and it must be shaped by all” [15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Anne Bouverot’s keynote explicitly states that AI must be shaped by all nations and that holding the summit in the Global South sends a powerful inclusive message [S1][S9].
MAJOR DISCUSSION POINT
Importance of hosting the AI Impact Summit in the Global South
Argument 2
France and India share a strategic partnership, co‑chairing initiatives such as the Resiliency Working Group
EXPLANATION
Bouverot points out that France and India have a long‑standing partnership and are jointly leading key AI initiatives, exemplified by their co‑chairing of the Resiliency Working Group. This collaboration showcases bilateral leadership in AI governance.
EVIDENCE
She references the “longstanding partnership” between the two countries and mentions that “France co-chaired with India, the Resiliency Working Group” [24-27][66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes that France co-chaired the Resiliency Working Group with India, illustrating the bilateral partnership in AI governance [S9].
MAJOR DISCUSSION POINT
Franco‑Indian partnership and broader international AI cooperation
Argument 3
A “coalition of the willing” (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive, sovereign AI development
EXPLANATION
The speaker describes a multilateral coalition of countries that possess AI talent and share a vision of inclusive, sovereign, and sustainable AI. This coalition aims to coordinate policies and resources across borders.
EVIDENCE
She outlines that “Coalitions of the willing of the countries that have key talent in AI, who share a vision that it must be inclusive and sustainable and a legitimate solution” [31-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bouverot introduces a “coalition of the willing” of countries with AI talent that share a vision of inclusive, sovereign AI development [S1][S9].
MAJOR DISCUSSION POINT
Franco‑Indian partnership and broader international AI cooperation
DISAGREED WITH
Speaker 1
Argument 4
An AI app at AIIMS that analyzes cough sounds on a smartphone can early‑detect tuberculosis, illustrating tangible health impact
EXPLANATION
Bouverot highlights a concrete AI application developed in India where a smartphone records a cough and the AI model distinguishes tuberculosis from other illnesses, offering an early detection tool for public health.
EVIDENCE
She recounts seeing “an AI application that if you just cough into a smartphone, AI analyzes the sound and can be an early detector of tuberculosis versus a more classical cold or other viral illness” at the All India Institute for Medical Science [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit keynote describes the AI cough-analysis tool for TB detection at AIIMS, and additional reports detail the same technology and its public-health relevance [S9][S11][S12].
MAJOR DISCUSSION POINT
AI applications for public health and data governance
Argument 5
The MOU between iSpirit (India) and France’s Health Data Hub enables privacy‑preserving cross‑border health data sharing for joint research
EXPLANATION
She explains that a newly signed memorandum of understanding creates a world‑first mechanism for transferring health data across borders while preserving privacy, facilitating collaborative research and potential cures.
EVIDENCE
She describes “the ongoing work between iSpirit here in India, the Health Data Hub in France, and other partners, and the recent MOU that was signed, will enable, I think, as a first in the world for data transfer, for health data transfer across borders in a privacy-preserving way. This will enable joint research and finding new cures for diseases” [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bouverot highlights the newly signed MOU as a world-first mechanism for privacy-preserving cross-border health data transfer, enabling joint research [S9].
MAJOR DISCUSSION POINT
AI applications for public health and data governance
Argument 6
The RUSH program of scientific exchanges accelerates Franco‑Indian research collaboration
EXPLANATION
Bouverot introduces the RUSH initiative, a series of high‑level scientific talks and exchanges between French and Indian institutions, designed to deepen research ties and foster joint projects.
EVIDENCE
She notes that “this week, there was a full program of scientific exchanges. We called it RUSH because there’s a rush to cooperate between our two countries. This was a series of exceptional talks by researchers and heads of institutions, and the next edition of that will be held in France” [51-55].
MAJOR DISCUSSION POINT
AI for research, academia, and the common good
Argument 7
Launch of an open‑hardware tool for linguistic diversity and AI‑powered translation supports India’s 22 official languages and promotes inclusive AI
EXPLANATION
She announces a partnership that will release an open‑hardware translation tool aimed at preserving linguistic diversity, leveraging AI to serve all of India’s official languages and demonstrating a commitment to inclusive technology.
EVIDENCE
She states that “we are launching an open hardware tool to promote linguistic diversity and AI-powered translation. This is a partnership between Bhashini and Current AI. With its 22 official languages and many more being spoken here in India, India perfectly embodies the challenges and the opportunities of cultural representations in AI systems” [59-62].
MAJOR DISCUSSION POINT
AI for research, academia, and the common good
Argument 8
A coalition for sustainable AI and the Resiliency Working Group address AI’s high energy demand and climate impact; a resilient AI challenge seeks concrete solutions
EXPLANATION
Bouverot points out that AI consumes large amounts of energy, threatening climate goals, and that a coalition for sustainable AI—co‑chaired by France and India—has created a Resiliency Working Group and a challenge to develop low‑impact AI solutions.
EVIDENCE
She explains that “AI requires huge amounts of energy and risks putting our climate goals at risk. So we launched this coalition and this year we co-chair, France co-chaired with India, the Resiliency Working Group. We’re launching today, together with India and other partners, a resilient AI challenge that will help find solutions in this very important area” [64-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote mentions the coalition’s focus on AI’s energy footprint, the Resiliency Working Group, and the launch of a Resilient AI Challenge; climate-impact concerns are further discussed in a session on climate-resilient AI systems and the challenge is detailed in a sustainability-focused briefing [S9][S16][S17].
MAJOR DISCUSSION POINT
Sustainable and safe AI
Argument 9
Emphasis on child safety through age‑verification mechanisms and anti‑cyberbullying measures; innovation must be paired with protection
EXPLANATION
She stresses that protecting children online is a priority, calling for stronger age‑verification, anti‑cyberbullying tools, and a balance between technological innovation and safety safeguards.
EVIDENCE
She remarks that “we must demand and strengthen age verification mechanisms. We must fight against cyberbullying. Innovation and protection can and must go hand in hand” [70-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bouverot calls for stronger age-verification and anti-cyberbullying tools, linking child protection to AI innovation in her keynote remarks [S9][S1].
MAJOR DISCUSSION POINT
Sustainable and safe AI
Agreements
Agreement Points
Both speakers stress the need for inclusive, responsible AI governance through international cooperation.
Speakers: Speaker 1, Anne Bouverot
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks
A coalition of the willing (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive, sovereign AI development
Speaker 1 highlights that the audience is worried about AI regulations and ethical, responsible AI frameworks [7], while Anne Bouverot describes a multilateral “coalition of the willing” that aims to make AI inclusive and sovereign [31-33]. Both points converge on the idea that AI governance must be ethical, inclusive and driven by international collaboration.
POLICY CONTEXT (KNOWLEDGE BASE)
The call for an inclusive, flexible international AI governance framework is echoed in research advocating a fast-adaptable policy approach [S32] and in the Global AI Policy Framework that stresses inclusive governance built on existing institutions [S44]; similar themes appear in multilateral forums emphasizing cooperation [S46][S47].
Both speakers underline the importance of multilateral diplomacy and global collaboration in shaping AI.
Speakers: Speaker 1, Anne Bouverot
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks
Coalitions of the willing of the countries that have key talent in AI, who share a vision that it must be inclusive and sustainable and a legitimate solution
Speaker 1 notes that Ms. Bouverot bridges innovation policy and multilateral diplomacy to advance responsible AI regulation [4], and Anne Bouverot points to a coalition of willing nations working together on inclusive AI [31-33]. This reflects shared belief in the central role of international cooperation.
POLICY CONTEXT (KNOWLEDGE BASE)
Multilateral diplomacy is highlighted by Bolivia’s emphasis on fairness, ethics and cooperation among all countries [S33] and reinforced by the broader push for multistakeholder collaboration in recent AI policy gatherings [S46][S47].
Similar Viewpoints
Both see AI governance as requiring ethical standards and a collaborative, inclusive approach across nations, as shown by Speaker 1’s call for ethical frameworks [7] and Anne’s description of a coalition that seeks inclusive, sovereign AI [31-33].
Speakers: Speaker 1, Anne Bouvreau
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks. A coalition of the willing (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive, sovereign AI development.
Both emphasize that AI challenges cannot be solved by a few actors; they must be addressed through broad, multilateral cooperation, reflected in Speaker 1’s reference to multilateral diplomacy [4] and Anne’s coalition narrative [31-33].
Speakers: Speaker 1, Anne Bouvreau
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks. Coalitions of the willing of countries that have key talent in AI, who share a vision that it must be inclusive, sustainable and legitimate.
Unexpected Consensus
AI must be shaped by all nations, not just a privileged few.
Speakers: Speaker 1, Anne Bouvreau
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks. Holding an AI Summit in a country from the global south is very important… AI is not a privilege of a few nations, not the preserve of a few companies. It is a global transformation and it must be shaped by all.
While Speaker 1 focuses on regulation and ethical frameworks, the wording about a “global conversation” [4] implicitly supports the idea that AI governance is a shared, worldwide responsibility. Anne explicitly states that hosting the summit in the Global South sends a powerful inclusive message [15-18]. The convergence on the principle that AI should be globally shaped was not an obvious point of overlap, making it an unexpected consensus.
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with Georgieva’s statement that AI should be accessible to everyone, not just a privileged minority [S34], and with the inclusive governance principles outlined in the Global AI Policy Framework [S44].
Overall Assessment

The two speakers largely agree that AI governance requires ethical, responsible frameworks and must be pursued through inclusive, multilateral cooperation. Speaker 1 frames the concern around regulation and ethics, while Anne Bouvreau expands this into concrete coalition-building, inclusive hosting, and collaborative initiatives. Their shared emphasis creates a coherent narrative that AI should be shaped globally, responsibly, and sustainably.

High consensus on the need for ethical, inclusive AI governance and international collaboration, suggesting strong alignment that can facilitate joint policy initiatives and cross‑border projects.

Differences
Different Viewpoints
Emphasis on how to achieve responsible AI – Speaker 1 calls for concrete AI regulations and ethical frameworks, while Anne Bouvreau focuses on partnership‑driven impact initiatives and coalitions rather than specific regulatory measures.
Speakers: Speaker 1, Anne Bouvreau
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks. A “coalition of the willing” (France, India, Brazil, Japan, Germany, Canada, etc.) promotes inclusive, sovereign AI development.
Speaker 1 stresses the urgency of establishing regulatory and ethical frameworks for AI ([7]), whereas Anne Bouvreau highlights multilateral coalitions, joint research, and impact-oriented programmes as the path forward ([31-33][64-69]), indicating a divergence in preferred mechanisms for responsible AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Anne Bouverot’s description of Europe’s shift toward innovation-focused investments and the Sustainable AI Coalition illustrates a partnership-driven approach rather than prescriptive regulation [S37]; the EU GPAI Code similarly favors flexible, non-legislative guidance to build a culture of restraint among AI operators [S38], contrasting with calls for concrete regulatory frameworks.
Unexpected Differences
None identified
The transcript contains only two speakers whose statements are largely complementary; no surprising or contradictory positions emerge beyond the differing emphasis on regulation versus collaborative impact.
Overall Assessment

The discussion shows limited overt conflict. The main point of divergence lies in the preferred route to responsible AI – regulatory frameworks versus multilateral partnership and impact‑focused initiatives. Both speakers share the overarching goal of inclusive, ethical AI development.

Low to moderate disagreement; the differing approaches are complementary rather than antagonistic, suggesting that future dialogue can integrate regulation with collaborative impact programmes to advance AI governance.

Partial Agreements
Both speakers agree that AI must be governed in a way that is inclusive and serves the global public interest. Speaker 1 frames this as a regulatory/ethical need ([7]), while Anne frames it as a symbolic and strategic message of global inclusivity ([15-18]).
Speakers: Speaker 1, Anne Bouvreau
Growing concern about AI regulations and the necessity of ethical, responsible AI frameworks. Hosting the summit in India signals that AI is a global transformation, not a privilege of a few nations.
Takeaways
Key takeaways
* Hosting the AI Impact Summit in India underscores that AI is a global transformation, not limited to a few nations.
* France and India have a deepening strategic partnership, co‑chairing initiatives such as the Resiliency Working Group and collaborating on multiple AI fronts.
* A broader “coalition of the willing” (including Brazil, Japan, Germany, Canada, etc.) is emerging to promote inclusive, sovereign, and sustainable AI development.
* Concrete AI applications are already delivering public‑health benefits, e.g., a smartphone‑based cough‑analysis tool at AIIMS for early tuberculosis detection.
* The recently signed MOU between iSpirit (India) and France’s Health Data Hub enables privacy‑preserving cross‑border health‑data sharing for joint research.
* Academic collaboration is being accelerated through the RUSH program of scientific exchanges between French and Indian institutions.
* An open‑hardware tool for linguistic diversity and AI‑powered translation is being launched to support India’s 22 official languages and promote inclusive AI.
* Sustainable AI is being addressed via a coalition and the Resiliency Working Group, with a new Resilient AI Challenge targeting energy‑efficiency solutions.
* Child safety in AI is highlighted as a priority, calling for stronger age‑verification mechanisms and anti‑cyberbullying safeguards.
* Overall, AI is framed as a societal, cultural, and political transformation that must be shaped responsibly rather than left to market forces alone.
Resolutions and action items
* Launch of an open‑hardware tool for linguistic diversity and AI‑powered translation (partnership between Bashini and Current AI).
* Initiation of the Resilient AI Challenge to develop solutions for AI energy consumption and climate impact.
* Implementation of privacy‑preserving cross‑border health data sharing under the iSpirit–Health Data Hub MOU.
* Continuation and expansion of the RUSH scientific exchange program, with the next edition scheduled in France.
* Co‑chairing of the Resiliency Working Group by France and India to steer sustainable AI policies.
* Commitment by France to work with India and other willing partners to build an inclusive, sovereign, and common‑good‑oriented AI ecosystem.
Unresolved issues
* Specific regulatory frameworks for AI governance remain to be defined and harmonized across participating countries.
* Details on how age‑verification mechanisms and anti‑cyberbullying measures will be standardized and enforced are not yet clarified.
* Scalable models for broader data governance that balance privacy, sovereignty, and research needs require further development.
* Mechanisms for ensuring that AI sustainability standards are adopted by private-sector developers are not fully addressed.
* How to extend the demonstrated public‑health AI applications (e.g., cough analysis) to other diseases and regions remains an open question.
Suggested compromises
* Balancing rapid AI innovation with protective measures (e.g., age verification, cyberbullying safeguards) to ensure safety without stifling progress.
* Integrating sustainability considerations early in AI system design rather than treating them as an afterthought, aligning environmental goals with development timelines.
Thought Provoking Comments
Holding an AI Summit in a country from the global south is very important from a symbolic perspective, but it is even more important from a strategic perspective.
Highlights the dual significance of venue choice—beyond symbolism, it signals a strategic shift toward inclusive global AI leadership, challenging the usual North‑centric narrative.
Sets the tone for the entire discussion, reframing the summit as a geopolitical statement and prompting later references to India’s role and the need for broader participation.
Speaker: Anne Bouvreau
AI is not a privilege of a few nations, not the preserve of a few companies. It is a global transformation and it must be shaped by all.
Broadens the conversation from technology to equity, urging a multilateral approach and questioning existing power structures in AI development.
Introduces the theme of inclusivity that recurs throughout her speech (public‑health use‑cases, data‑sharing, linguistic diversity), steering the audience toward thinking about shared responsibility.
Speaker: Anne Bouvreau
AI is at the center of a fierce geopolitical and economic competition… but this also created a momentum for stronger collaboration between countries such as France, India, Brazil, Japan, Germany, Canada.
Acknowledges the reality of AI‑driven rivalry while simultaneously proposing cooperation as a counter‑balance, challenging a zero‑sum view of AI geopolitics.
Creates a turning point from a competitive framing to a collaborative one, paving the way for the announcement of coalitions, MOUs, and joint initiatives later in the speech.
Speaker: Anne Bouvreau
We are going from the AI Action Summit to the AI Impact Summit. Impact in education, in public health, impact that improves lives, not just in theory, but in practice.
Shifts focus from abstract policy discussions to tangible outcomes, urging stakeholders to measure success by real‑world benefits.
Guides the conversation toward concrete examples (e.g., TB detection, data‑governance) and signals that the summit will prioritize demonstrable results.
Speaker: Anne Bouvreau
An AI application that, if you just cough into a smartphone, analyzes the sound and can be an early detector of tuberculosis versus a more classical cold or other viral illness.
Provides a vivid, relatable illustration of AI’s potential in public health, turning abstract benefits into a specific, actionable use‑case.
Anchors the earlier claim about ‘impact’ with a real example, inspiring audience members to consider similar deployments in their own contexts.
Speaker: Anne Bouvreau
The ongoing work between iSpirit in India, the Health Data Hub in France, and other partners… will enable, as a first in the world, privacy‑preserving cross‑border health data transfer for joint research and new cures.
Introduces an innovative governance model that balances data utility with privacy, addressing a core tension in AI regulation.
Expands the dialogue from applications to the infrastructure needed for responsible AI, prompting interest in international data‑sharing frameworks.
Speaker: Anne Bouvreau
We are launching an open hardware tool to promote linguistic diversity and AI‑powered translation… India, with its 22 official languages, perfectly embodies the challenges and opportunities of cultural representation in AI systems.
Links AI development to cultural equity, highlighting language inclusion as a critical, often overlooked dimension of AI ethics.
Adds a new layer—cultural sustainability—to the conversation, encouraging participants to think beyond technical performance toward representation.
Speaker: Anne Bouvreau
AI requires huge amounts of energy and risks putting our climate goals at risk. We launched a coalition for sustainable AI and a Resiliency Working Group to embed sustainability from the design stage.
Brings environmental impact into the AI governance conversation, challenging the assumption that AI development is neutral with respect to climate.
Creates a pivot toward sustainability, leading to the announcement of a “resilient AI challenge” and signaling that future discussions must integrate climate considerations.
Speaker: Anne Bouvreau
Safety, especially for children, must be a priority—age verification mechanisms, fighting cyberbullying, and ensuring innovation and protection go hand in hand.
Elevates the discussion of AI ethics to a societal level, emphasizing vulnerable populations and concrete policy levers.
Broadens the scope of the summit to include child protection, prompting attendees to consider regulatory measures alongside technological advances.
Speaker: Anne Bouvreau
The real question is, will we shape AI? Or will we tell our children that we didn’t even try?
A powerful rhetorical close that reframes the entire dialogue as a moral imperative, urging proactive stewardship rather than passive observation.
Leaves the audience with a call to action, reinforcing all previous points and setting an urgent, purposeful tone for the remainder of the summit.
Speaker: Anne Bouvreau
Overall Assessment

Anne Bouvreau’s remarks functioned as the backbone of the AI Impact Summit, repeatedly shifting the conversation from abstract geopolitics to concrete, inclusive, and responsible AI practices. By juxtaposing competitive realities with collaborative opportunities, she reframed the narrative from a power‑play to a shared‑responsibility agenda. Each highlighted comment introduced a new dimension—strategic geography, equitable participation, real‑world health applications, privacy‑preserving data sharing, linguistic diversity, environmental sustainability, and child safety—thereby expanding the scope of the discussion and prompting participants to consider multifaceted, cross‑sectoral solutions. The cumulative effect was to set a forward‑looking, action‑oriented tone that positioned the summit as a platform for tangible impact rather than mere policy debate.

Follow-up Questions
How can we develop and implement effective age verification mechanisms to protect children from AI‑related harms?
Ensuring child safety is highlighted as a priority, requiring research into reliable verification technologies and policies.
Speaker: Anne Bouvreau
What are the best practices for privacy‑preserving cross‑border health data sharing, as exemplified by the iSpirit‑Health Data Hub MOU?
The MOU aims to enable global health research, but methods to safeguard privacy while allowing data transfer need further investigation.
Speaker: Anne Bouvreau
How can AI‑based cough analysis on smartphones be validated and scaled for early tuberculosis detection in diverse populations?
The demonstrated application at AIIMS shows promise, but rigorous clinical validation and deployment strategies are required.
Speaker: Anne Bouvreau
What technical solutions and standards are needed to create open‑hardware tools that promote linguistic diversity and AI‑powered translation across India’s 22 official languages?
Launching such tools addresses cultural representation, yet research is needed on hardware design, language datasets, and open‑source governance.
Speaker: Anne Bouvreau
How can AI systems be designed for energy efficiency and climate resilience to meet sustainable AI goals?
AI’s high energy demand threatens climate targets; the Resiliency Working Group and challenge call for research into low‑carbon AI architectures and metrics.
Speaker: Anne Bouvreau
What frameworks can measure and maximize AI’s impact on public health, education, and other societal sectors?
Transitioning from an AI Action Summit to an Impact Summit implies a need for robust impact assessment methodologies.
Speaker: Anne Bouvreau
Will we shape AI proactively, or will future generations inherit an ungoverned technology?
This rhetorical question underscores the urgency for policy, governance, and collaborative research to steer AI development.
Speaker: Anne Bouvreau

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Artificial General Intelligence and the Future of Responsible Governance


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by noting that rapid advances in AI since 2020, especially the surge of powerful models in 2023-24, have sparked renewed debate about the emergence of artificial general intelligence (AGI) and the risk of missing the opportunity to shape it responsibly [1-4][6]. Participants agreed that while many definitions exist, AGI is generally understood as an AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow, task-specific domains [12-18]. Simonas Satunas offered a concrete, albeit simplified, definition: an AGI would perform any human professional task with comparable accuracy, and he estimated a 3-to-7-year horizon for reaching that milestone based on growing public trust in generative AI tools [21-24].


The discussion highlighted that massive compute investments are driving the current AI boom, but compute is only one element of a broader ecosystem that also requires energy-efficient hardware, data, and especially human capacities such as critical thinking [70-71][72-85][86-90]. Alexandra emphasized that achieving human-like situational awareness will demand access to large amounts of private data, raising privacy concerns and underscoring the need for robust regulatory frameworks [35-37][38-41]. Kenny warned that as AI becomes more capable, it can both generate sophisticated attacks and mimic human decision-making, making security threats more realistic and amplifying the importance of educating defenders [105-108][110-112].


Simonas Satunas outlined a four-tier risk hierarchy-from traditional privacy and cyber-fraud to mental-health impacts, social empathy erosion, and macro-level threats to democracy-calling for coordinated national and international strategies to mitigate these costs [131-138]. The panel concurred that critical thinking and public awareness are essential safeguards, with education needed to help people identify AI-generated misinformation and understand underlying threats [154-155][164-170]. Regarding governance, participants suggested technical measures such as model labeling and broader regulatory actions, noting Europe’s tendency toward over-regulation but recognizing the potential of reasonable standards [173-176].


Alexandra proposed building resilience through rollback mechanisms and contingency planning, likening the approach to preparing for electricity outages to reduce the impact of AI failures [187-190]. Kenny introduced the concept of an “AI Operating Procedure” (AOP) analogous to existing SOPs, which would institutionalize bias reviews, ethical training, and continuous validation of model outputs [191-199]. The discussion concluded that immediate actions should include investing in education, establishing robust risk-mitigation frameworks, and developing early-stage “anchor controls” to guide the safe evolution toward AGI [202-207][173-176].


Overall, the panel stressed that while AGI may be imminent, its safe deployment depends on balanced compute investment, privacy-respecting data practices, security preparedness, and proactive governance structures [70-71][105-108][173-176].


Keypoints


Major discussion points


Defining AGI and estimating its arrival – The panel agreed that AGI means an AI that can reason, learn, adapt, transfer knowledge and operate beyond narrow tasks [12-18]. Simonas Satunas offered a concrete, if simplistic, definition – an AI that can perform any human task with professional-level accuracy – and projected a 3- to 7-year horizon for reaching that milestone [21]. Vinayak opened the session by noting the rapid acceleration of AI since 2020 and the growing debate around AGI’s feasibility [1-2].


Compute, data, and the human factor as essential ingredients – While compute power is often highlighted, Simonas Satunas emphasized that it is only one link in a chain that also includes energy, data, implementation, language and, critically, human education and critical-thinking skills [72-89]. Alexandra added that achieving human-like situational awareness will require low-latency, energy-efficient hardware and massive private data, raising privacy limits [31-36]. Vinayak later asked why massive compute investments are needed for attention, context, reasoning and low latency [65-69].


Security, privacy and ethical risks of increasingly powerful AI – Kenny Kesar warned that as AI accuracy improves (moving from 90 % toward “five-nines”), the technology will become capable of both sophisticated attacks and autonomous decision-making, creating new security threats [41-48][105-108]. Simonas Satunas outlined four risk layers-from classic privacy and cyber-fraud to mental-health, social cohesion and macro-level threats to democracy-calling for coordinated national and international strategies [131-138]. Alexandra highlighted the need for human oversight, pointing out how algorithmic bias can be exposed and corrected (e.g., the NBA video-surveillance example) [96-102].


Governance, “anchor-control” mechanisms and early-stage regulation – The moderator asked for concrete control concepts to guide the transition to AGI [172-176]. Simonas Cerniauskas suggested technical safeguards such as model labeling and broader regulatory measures, noting Europe’s tendency to over-regulate but also its potential for viable standards [173]. Simonas Satunas argued that small nations must collaborate globally to embed moral and egalitarian values into AI development, citing the Myanmar-Meta case as an illustration of ethical failure [174-180]. Alexandra proposed building resilience and rollback mechanisms to mitigate the impact of failures, emphasizing a risk-reduction mindset [187-190].


Impact on cognition, critical thinking and societal dependence on AI – Several speakers expressed concern that pervasive AI use could erode individuals’ critical-thinking abilities, creating a feedback loop where AI-generated content dominates training data and stifles human intellectual growth [150-170][164-170]. The panel stressed the need for widespread education and awareness to help people identify manipulation, disinformation, and “cognitive warfare” [154-155].


Overall purpose / goal of the discussion


The session was convened to clarify what “Artificial General Intelligence” (AGI) actually means, to assess how close we are to achieving it, and to explore the security, privacy, ethical, and governance challenges that AGI will introduce. Participants aimed to identify early-stage “anchor controls” and practical steps-technical, regulatory, and educational-that societies can adopt now to prepare for the transformative impact of AGI.


Overall tone and its evolution


– The conversation began with an optimistic, exploratory tone, highlighting rapid AI progress and the excitement of defining AGI [1-2].


– It then shifted to a cautious, risk-focused tone, as speakers detailed technical limitations, compute demands, and the widening gap between current narrow AI and true general intelligence [21][31-36].


– Mid-discussion the tone became protective and solution-oriented, emphasizing security threats, ethical pitfalls, and the need for robust governance and resilience [41-48][131-138][187-190].


– Toward the end, the tone turned reflective and advisory, urging education, critical-thinking cultivation, and coordinated global action to mitigate societal dependence on AI [150-170][154-155].


Overall, the panel moved from enthusiasm about AI’s potential to a sober assessment of the safeguards required before AGI can be responsibly deployed.


Speakers


Mr. Vinayak Godse – Moderator/host of the panel discussion on AGI; leads the conversation and poses questions to panelists. [S2]


Mr. Simonas Satunas – Panelist; provides a simplified definition of AGI and discusses timelines and societal impact. [S1]


Simonas Cerniauskas – Panelist; contributes perspectives on AGI definitions, investment cycles, and the broader AI ecosystem. [S5]


Mr. Kenny Kesar – Panelist; consultant advising AI clients, focuses on accuracy, compute, market disruption, and ethical/operational procedures for AI. [S6]


Ms. Alexandra Bech Gjørv – Panelist; head of SINTEF, Norway’s largest research institute; discusses hardware, neuromorphic computing, privacy, governance, and societal implications of AGI. [S7]


Additional speakers:


None. All speakers appearing in the transcript are accounted for in the list above.


Full session report: comprehensive analysis and detailed insights

The panel opened with Vinayak Godse framing the rapid expansion of artificial-intelligence research since 2020 and the surge of powerful models launched from early 2023 as a catalyst for renewed debate over artificial general intelligence (AGI) and the risk of missing the chance to shape it responsibly [1-4][6]. He warned that societies that do not begin to understand what AGI could mean for the next three to ten years will fall behind in governance and policy [5-7].


A broad consensus emerged that AGI must transcend today’s narrow, task-specific systems. Speakers agreed that a true AGI should be able to reason, learn, adapt, transfer knowledge and operate across domains rather than being confined to a single function [12-18]. Simonas Satunas offered a concrete, if simplistic, definition: an AGI would perform any professional human task with comparable accuracy and professionalism [21-23]. Citing a poll in which roughly 50 % of Israelis said they trust generative-AI tools more than friends, he projected a three-to-seven-year horizon for reaching that milestone [24-25][21].


Technical foundations and compute – Kenny noted that moving model accuracy from the current 90 % toward “five-nines” (99.999 %) historically required five to ten years for the first extra nine, and each subsequent nine adds roughly one to two years [41-48]. Cerniauskas warned that the industry may be heading toward an “over-capacity” situation for a couple of years, quoting Zuckerberg’s comment about excess compute resources [80-82]. Satunas used a 19th-century transport-infrastructure metaphor to argue that compute is only one link in a chain that also includes energy-efficient hardware, vast data, implementation expertise, language resources and, crucially, human critical-thinking capacity [72-90]. Alexandra added that achieving human-like situational awareness will require low-latency, neuromorphic or edge-computing architectures and access to large amounts of private data, which in turn raises serious privacy constraints [31-36][35-37].


System 1 / System 2 thinking and latency – Vinayak highlighted the distinction between intuitive “system 1” and logical “system 2” thinking, noting that the latency of purely language-based models limits system 2 performance. The panel agreed that reducing system 2 latency is essential for AGI, and that AI is helping to close this gap [65-69].


Security, privacy and risk taxonomy – Kenny emphasized that more accurate models will be able to launch sophisticated cyber-attacks and could emulate a CEO to make decisions, making AI-driven deception a concrete threat [105-108]. Satunas presented a four-level risk taxonomy: (1) classical privacy, security and fraud risks; (2) human health and mental-health impacts; (3) social effects such as erosion of empathy, bullying and addiction; and (4) macro-level threats to democracy and foreign manipulation [131-138]. He stressed that mitigation at each level will require costly, coordinated national and international strategies [131-138]. Alexandra reiterated that privacy limits on personal data impede the development of the deep situational awareness required for AGI, underscoring the tension between data needs and privacy protection [35-37].


Ethics, governance and “anchor-control” proposals


* Technical labeling and European-style regulation were advocated by Cerniauskas as an immediate lever [173-176].


* Satunas called for a global, multi-stakeholder regulatory framework that embeds egalitarian values into AI design, citing the Meta algorithm that amplified violent content in Myanmar as a cautionary example [174-180].


* Alexandra proposed resilience and rollback mechanisms-analogous to planning for electricity outages-to limit the impact of AI failures [187-190].


* Kenny introduced AI Operating Procedures (AOP), formal SOP-like processes that embed bias reviews, ethical training and continuous validation into organisational practice [191-199].


Critical-thinking concerns – Vinayak warned that AI’s ability to provide rapid, multi-dimensional attention may erode human critical thinking, which he defined as “the ability to give attention to various dimensions” [156-163]. Kenny quantified the problem, noting that roughly 30 % of online content is already AI-generated, creating a feedback loop that could stall the evolution of human intellect if people stop exercising their “brain muscles” [164-170]. Satunas echoed this, urging investment in education that cultivates critical-thinking skills to prepare society for AGI [87-90][154-155].


Commercial viability – Kenny observed that AI is not commercially viable today because the costs outweigh the ROI [200].


Closing remarks and concrete outcome – After summarising the discussion, Vinayak thanked the participants, announced the launch of the “AI Cyber Security Terminal”, and noted the upcoming photo-shoot [210].


In conclusion, the panel agreed that AGI is likely to arrive within a near-term horizon, but its safe realisation depends on balanced investment in compute, energy-efficient hardware, high-quality data, and, crucially, human critical-thinking capacities. Immediate actions include developing AI Operating Procedures, establishing technical safeguards such as model labeling, investing in education to preserve critical thinking, pursuing tiered model strategies to manage compute costs, and creating resilience and rollback plans for AI failures. Unresolved issues remain around the exact timeline for AGI, reconciling privacy with the data needs of situational awareness, defining globally acceptable governance structures, and preventing the erosion of human cognition through AI-generated feedback loops. Addressing these challenges will require coordinated effort across industry, academia and governments, both nationally and internationally, to embed ethical, transparent and robust controls before AGI becomes a pervasive reality.


Session transcript: complete transcript of the session
Mr. Vinayak Godse

Pet Summit and the basic idea and intent behind setting up this session is while all the things were happening in AI in the period of 2020, a lot of development happening and somehow all that is now leading to kind of acceleration that we are seeing in last three years of time and especially this year, since January, all the new launches that we see, we are getting the first sign of a powerful AI, right? And now because of that, there is a discussion about AGI seems to be gaining quite a significant ground, right? And although people still have a lot of doubt and skepticism about whether it is really reality or possibility in coming future or what that means, many people are still skeptical.

They are struggling to define what that means for us as a society. And I can speak about India: probably we didn't pay much attention when AI was coming. If we don't pay attention now to what is coming in the next two, three, five or ten years, which is probably the timeline for AGI, then we will again miss out on thinking, talking, discussing and governing it better. So this discussion is to help us, and the audience here, understand: what do we mean by AGI? Can we really think about that right now? What are the different concerns that we need to think about? Welcome to the panel, and let us then try to find the possible meaning for security, privacy and ethics.

So I would like to start with you: how do you see this concept of AGI, and fundamentally how will it be different? What is your understanding of the distinction between artificial intelligence and artificial general intelligence?

Simonas Cerniauskas

So, yeah, thank you very much for having us here. And, like you said, it's a really nice topic to wrap up the conference. Of course, there are different definitions of AGI, and at the same time most of them agree that it's about smarter AI than we have right now. We were joking a bit that, on the way here, the traffic was really exceptional, and that's a sign that maybe we are still not there today. But basically, among the common agreements: this smarter AI should reason, it should learn, it should adapt, and it should transfer knowledge.

And it shouldn't be very narrow. Of course, right now we have great areas where AI is really helping a lot, like code development, customer service, et cetera, but it should be much broader. And I don't think any of us, maybe the colleagues, will be able to answer when we will have it and in what timing, but definitely that's one of the big topics right now.

Mr. Vinayak Godse

Let me come to you. You look after the digital initiative, and artificial intelligence is one of your important research areas. We are grappling with understanding what AI is right now, but can we think about what would happen in the next three to five years? That seems to be the timeline people give for AGI.

Mr. Simonas Satunas

So I'm the one with the date; I'll do my best. First of all, my definition of AGI is very simplistic, and I think that we need some simple explanation in this field. My very simple explanation is: AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional. Now, this is not an optimal definition, because people can ask about "every task": if a baby is crying, will the AGI help him stop crying? And people can ask what the level of professionality is. But I think this is something that we can digest. And I understood that we are getting closer, not from a technology perspective, but from the perspective of talking with real Israelis about their problems. Five years ago, when I was telling people this definition of AGI, they were like, "oh, it'll never happen, not in our lifetime." And right now, when I'm speaking with Israelis and I tell them this is AGI, they're saying, "oh, aren't we there yet? Because I thought ChatGPT can help me like a lawyer, isn't that true?" Now, I think we are not there yet. There is a very sharp line between the AI that we are experiencing today and true AGI. But the fact that the audience is already confused, and the fact that people place trust in GenAI tools (50% of Israelis trust them more than they trust their friends; many trust them more than they trust human professionals), puts us closer to AGI. So I would say it's a matter of three to seven years until we reach that milestone.

Mr. Vinayak Godse

So, coming to you, Alexandra: how do you see this as a concept? What is leading to this AGI, and what would we do that will impact the future of AI and bring this age of AGI in three to seven years' time?

Ms. Alexandra Bech Gjørv

Well, I'm not necessarily subscribing to the time frame. I think that depends on how much money we throw at it, and there are other things to throw money at as well. For example, we had a discussion with my team: are machines able to make complex decisions as fast as humans? In some areas, many operations demand millisecond, reflex-level response. You can see that machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, et cetera, that's still quite far away. They take too long. And in a dynamic environment, a wrong decision or a late decision is really a wrong decision.

So in order to get there, you need both low-latency, energy-efficient hardware, neuromorphic and edge computing, and architectures beyond autoregression. But the researchers in SINTEF (I head up the largest research institute in Norway) point to promising directions like hierarchical reflex-reasoning systems, embodied multimodal learning, et cetera. There's really no real doubt that we will get there. But in order to have situational awareness like a human, you have to study a lot of data that would be considered private and personal. So privacy imposes real limits. And that triggers a lot of other questions that I'm sure we'll get into.

Mr. Vinayak Godse

Yeah, we'll come to that. So, Mr. Kenny, you must be serving many clients right now on AI, right? And all of us are getting stunned by the progress and the acceleration of capability that is happening week by week, and that also scares us about what is coming next. Somebody defines AGI with two words. One is consistency across domains: it will be so general that it will perform consistently across domains. And the second part is that it will be reliable as well. Currently, sometimes it doesn't have anything and it still throws an output, and that's why hallucination happens. So consistency and reliability, that's what AGI will bring to the table; it will solve a lot of the problems that we see right now, even as we keep being stunned by the things AI can already do. So there are routes that will lead us to AGI. From your perspective, what does the journey that will take us there look like?

Mr. Kenny Kesar

So, you know, I agree with the panel on a couple of things we talked about in terms of where models are evolving. But you bring up another component: accuracy. I'll talk about accuracy first, and then I'll come back to the disruption which is happening in the market. Now, the epitome of accuracy is five nines. For AI to get from 90% to 99%, it took five to ten years. Now, every nine that you add is another year or two, to the point where you get to 99.99 and more nines. So every nine that you're adding has a time frame to it. And with the number of nines that you add, you get closer to general intelligence, because that's what it takes to approach the human brain.
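The "nines" framing above can be made concrete: each added nine cuts the permitted error rate by a factor of ten, which is why each one is progressively harder to earn. A minimal sketch of that arithmetic (the year-per-nine figures quoted by the speaker are his rough estimates, not data):

```python
# Each "nine" of accuracy reduces the allowed error rate tenfold:
# 2 nines = 99% (1 error in 100), 5 nines = 99.999% (1 error in 100,000).
def error_rate(nines: int) -> float:
    """Residual error rate for a given count of nines."""
    return 10.0 ** (-nines)

for n in range(2, 6):
    accuracy = 1.0 - error_rate(n)
    per_million = error_rate(n) * 1_000_000
    print(f"{n} nines: accuracy {accuracy:.5f}, ~{per_million:.0f} errors per million")
```

The point of the sketch is that going from 2 to 5 nines is not a 3% improvement in any practical sense: it is a 1000x reduction in error, which matches the speaker's claim that each added nine costs comparable effort to the last.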

I'll take the topic of autoregression that you talked about. AI right now is built on regression, on the learnings of the neural network, the neural network maturing on information that it sees. But the human brain is also inventing; it's researching. So when AI really gets to the point of being able to research and bring new ideas to life the way a human brain does, you're getting closer to intelligence. Now, the disruption in the market that you've seen, with announcements across the different players which dominate the AI market, is creating a disruption in the industry, and I think it's the right disruption. It's the disruption that the word processor did to the typewriter, what computers did to the word processor, and what cloud did to the data center.

This is another such shift, but it's much faster, because it's more pervasive and it impacts everybody in life. So people are talking about how it translates to them. When I say how it translates, it's about how we structure processes. Everybody agrees accuracy is a work in progress, and since accuracy is a work in progress, we have to be really mature about the use cases that we put onto it. We have to look at the human pyramid, which components of the pyramid you're going to address. So the way we are advising our clients, and what we're doing ourselves, is maker jobs, which are basically repetitive jobs with little context.

AI does those very well, but you create a controller for these autonomous agents. So a combination of probabilistic and deterministic is what the near future will be, becoming more and more deterministic as we get to general intelligence, because from a human perspective, it's mostly deterministic.

Mr. Vinayak Godse

Right. Yeah. Thank you all for putting some level of clarity on what this means. At the end of the day, AGI is, as they say, attention: the ability to give attention to every possible thing that millions and billions of people are asking questions about. But, as you rightly say, context matters, so it's not only attention; it should be contextual to your requirements and the things that you do. And the third important part is reasoning, and the last six months have been great months for the reasoning that models bring to the table. So my question is, and any of you can answer this: for achieving all of these things, why does compute become so important? Why do you need this much compute? Why are trillions of dollars being invested to make sure that it can give attention to each and every problem, be contextual, reason, and at the same time manage latency, as I talked about? What is the role of compute in this? Any of you.

Simonas Cerniauskas

Yeah, so of course, if I may start, and please do chime in. Currently we are at a super-high cycle, let's say, of those investments, and most of us are also wondering: is it a bubble, when will it deflate a bit, is it really sustainable in some cases? Every one of us most likely has our own opinion. But still, this race to be number one, this belief that if you are number one you will remain number one, this momentum, plus a huge appetite and all this hype, definitely brings much, much more money to the table than we could ever imagine. And at the same time it depends a lot, of course, on the algorithms and how efficient they will be. All of us most likely remember last year's DeepSeek moment, and there are also other models which are much more efficient. So at some point we might understand that it's overestimated, overinvested.

At the same time, I remember one of Zuckerberg's quotes that said: okay, in the worst-case scenario, I will have overcapacity for a couple more years and then I will use it.

Mr. Simonas Satunas

So my humble opinion is that compute is one element in a chain of elements, and sometimes we treat this element as the only one. Let's explore a metaphor. Imagine that we are in the 19th century and a prophet arrives and tells us: okay, in five years a new technology will emerge that will enable you to travel from Delhi to Bangkok in less than an hour. But I don't know what the technology is. Maybe it's a ship, maybe it's a car, maybe it's a train, maybe it's an airplane, but we must be prepared. So everyone tries to be prepared and to build the right infrastructure. The problem is that everyone thinks about it as something else.

So one will build an airport, another will build rails, and another will build boats. I think that we are at this moment. We know that AGI will arrive, we know that it is soon, and we know that we must be prepared. Compute is one of the elements that is necessary, but energy is also important, heating and cooling are also important, data is extremely important, implementation is important, and language is important, in India as well. I think that one of the elements we are not investing enough in is the human element. Think about critical thinking, for example. I don't know when AGI will arrive, but I know that already now it is very important for us to raise critical thinking among the public.

When you hear something in the news, when you see something: was it made by AI? What is the manipulation that is being forced upon me? So I think that investing in education is no less critical than investing in compute.

Mr. Vinayak Godse

And another element I want to come to you on, which you talked about: there is a very interesting discussion about System 1 and System 2 thinking. Humans are more intuitive in their responses; System 2 is more logical, and AI is probably helping with that. But latency is an important area, and that's why they are putting a lot of effort into improving compute so that the latency of System 2 thinking is reduced, so that your intuitive thinking can improve alongside it. But it's not only the compute: perception, the ambient, the senses, the emotions, all of that also matters a lot, and that's where the limitations of language-based models are getting exposed. You did talk about that in your initial remarks; can you just throw some light on that?

On language? On the different types of models, right? Ambient sensing, compute for that matter, the world models that people talk about, so…

Ms. Alexandra Bech Gjørv

Well, I first wanted to agree with Nir, sorry, that if you are a government, democratic access to compute is a big topic. I think you can really get lost in just investing in compute power, so investing in skills, in leading-edge technology understanding in your own country, and participating in the regulatory approach matters. One of the things that I care about is this: everybody says there should be human oversight, but once you get into these dilemma situations, like what should happen in a car accident, humans are not very good at understanding risks, and humans are not very good at really making ethical decisions. They tend to go as far as "do your best and then let moral luck decide who gets hurt." But in machine-driven systems you actually have to make decisions about those things. So I think educating our politicians, too, to know that you have to make the hard choices is important, because otherwise the machines will make them for you, they will perpetuate our biases, and it will not end well.

But then I just wanted to share a little story that I heard. Michael Lewis, the Moneyball guy, has this anecdote: in the basketball association in the States, they started video surveillance, and the coaches were all making racist decisions and home-team decisions. By showing the videos and showing the statistics, the next season they couldn't find any bias at all. So I think that's a good example of how the machines can make people better, whereas we're not able to better ourselves over time. I just thought this was a nice anecdote for this.

Mr. Vinayak Godse

Thank you. And I'll come to Kenny. As we are trying to solve problems of security and privacy with the current, already big capability of AI, we are struggling to understand what it means for security and what it means for privacy, and suddenly there is a significant acceleration happening. So what can we do right now for security and privacy that could help us graduate as more and more powerful models come in? Can you help us with that?

Mr. Kenny Kesar

Yeah. I think with security, as we evolve, and we talked about compute: compute gets bigger, context gets bigger, we get smarter about what AI can do, and definitely the same AI that can generate can mount more sophisticated attacks. And when we get to AGI, the biggest thing is that it could be emulating a human. Let's say, in a company, it could emulate a CEO and make a decision, because it's getting so close to being natural. The threat is real. Now, even today, without AI, you need to be just a step ahead of the bad actors, the people who are into cybercrime. You just have to be a step ahead. And similarly, we were mentioning the human portion, right?

The human portion needs to get more educated, where there is going to be a set of humans using the same AI to build better agents to fight attackers. So now it's a question of the tooling that you have at hand. Even today, it's the tools; it's a human who builds tools to fight your cyber threats. In the next era, it'll come close to science fiction, with agents trying to lock humans out. But that's, I would say, still science fiction. The fact is, as we evolve, we need to right-size the solution, and that's how we will manage compute too. You don't use an i7 computer to do the simple calculator task of adding two numbers, right?

You use a calculator. So, in the context of the world, we're going to have SLMs, small language models, that will do smaller things so that we can manage compute, and you have the bigger models that will solve world hunger, so to speak, with different levels of machines and processing. I think there will be tiering. Right now, it's a fight to be first, and with the fight to be first comes bigger, better, more elaborate. But as it evolves, you'll get the right-sized fit for each task. Only then will it be commercially viable. AI is not commercially viable today; the costs outweigh the ROI.
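The tiering Kenny describes, routing cheap, narrow requests to a small model and reserving the large model for hard ones, can be sketched as a simple router. This is a minimal illustration only: the model names and the complexity heuristic are hypothetical placeholders, not real products, and a production router would use a learned classifier rather than a word count.

```python
# Hypothetical tiered router: serve simple prompts from a small, cheap
# model and escalate only complex ones to the large, expensive model.
SMALL_MODEL = "slm-3b"    # placeholder name for a small language model
LARGE_MODEL = "llm-400b"  # placeholder name for a frontier-scale model

def looks_complex(prompt: str) -> bool:
    """Crude stand-in for a real complexity classifier."""
    return len(prompt.split()) > 50 or "step by step" in prompt.lower()

def route(prompt: str) -> str:
    """Return the model tier a prompt should be served by."""
    return LARGE_MODEL if looks_complex(prompt) else SMALL_MODEL

assert route("What is 2 + 2?") == SMALL_MODEL
```

The economic point carries through: if most traffic is simple, the average cost per request is dominated by the small tier, which is what makes the overall system commercially viable.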

Mr. Vinayak Godse

Yeah, the current cost is significantly higher. You can do a POC, but once you put it into a production environment, the token cost is too high relative to the ROI. So, Nir, I want to come to you. There is an established understanding of security, privacy, safety and ethics, right? That's the paradigm we at least try to work within right now. But would AGI be an altogether different paradigm? Will the concepts of security and privacy be foundationally very different from what we discussed just now?

Mr. Simonas Satunas

So, as I see it, when we try to deal with the risks that AI poses, we distinguish between four different levels. The first level is the classical risks: privacy, security, cyber fraud. For every technology we have had since the 90s, we need to explain how it meets the current risks, and AI is much more powerful and poses many more risks, but these are the kinds of risks that, when we design products, we know how to deal with. Above it there is the level of human health and mental health. We find that AI solutions can be quite problematic for mental health and can cause a lot of damage in some cases, and this is something that is not yet well understood and investigated. Above that there is a social level.

What does it do to the empathy between people? Normally people say, "oh, I see that it's bad for my kids; they are experiencing bullying or addiction." Usually what's bad for your kids is also bad for you, and we understand that these are complications we didn't think about when we coded. And the highest level is the macro level: what does it do to society? What does it do to democracy? I think several countries are now experiencing foreign manipulation, and it is very easy to run campaigns built on fake news; we see that manipulation can become very problematic. So I think that a national strategy and an international strategy should address all these levels, and all these levels have mitigations, but they are costly and they need collaboration.

So we need to be in close collaboration in order to mitigate these risks.

Mr. Vinayak Godse

It's good, the way you put the structure, right? The things it will do to us, to our brains, the things that will impact us individually. We discussed that in one of the sessions we hosted on neuroscience and AI: what it means for the brain development process if we use AI for every small thing we want to do, whether brain development plateaus, what it will mean for society, and then what the macro impact is. Do you want to add something on that?

Ms. Alexandra Bech Gjørv

Yeah, I just want to build on that, sorry. It's not just targeted manipulation, or the things we see in our kids, or somebody walking around with a button called "Friend" as the only friend they need. It's also, in the geopolitical context, the well-structured ability to create completely different information universes. You don't need to be neurologically strange; you just see a completely different view of the world. We just published a paper in Science on these agent swarms, and I'm reading a book about the Ukraine-Russia war going on now and how large populations are overpowered by totally different images of the world from ours. Obviously your defense systems need to be hardened against those kinds of manipulations, but it's actually also an offensive strategy to field good bots that enter those universes.

It’s an actual battleground in and of itself, and it’s very strange to think about the world in that way, but I think you’re very naive if you don’t start systematically working on how you make your conviction of what the world is like also part of the people that you need to somehow, hopefully not defeat, but relate to and convince that things can be better. So it’s not just a technological challenge. I would say it’s a huge mental leap for most of us.

Mr. Vinayak Godse

So, Simonas, the question is: the more we use AI, the more dependent we become on AI systems, and the more people's ability to think critically will go down. The speed will increase the dependence, and then AI becomes more powerful still. With what we see in terms of misinformation, disinformation and deepfakes, there will probably be a different kind of cognitive warfare. So how do you see such challenges? You talked about society and the individual; what kind of implications will this have for the individual, for society, and for the overall way the world is organized?

Simonas Cerniauskas

Yeah, absolutely. So basically, all those layers and all the dependencies, as you rightly stated: critical thinking, of course, is one, but also awareness, education, and the skills and abilities for people to understand these things. For this audience, more or less everything is self-evident, but when you start talking to people in the street, or from different backgrounds, you realize that what is self-evident for you might be completely different for another person. Finding ways to educate them, to basically help them identify the threats, is one of the key priorities, and also an obligation, I would say, on our side.

Mr. Vinayak Godse

One of the important challenges of critical thinking which I come across is this: critical thinking is nothing but your ability to give attention to various dimensions, nuances, perspectives and views, right? It takes a tremendous amount of effort for me to become a critical thinker, and AI solves that quite easily for me. It can bring me all the attention, all the dimensions, all the nuances, all the viewpoints; I can get access to them quickly. So even for critical thinking, Kenny, this question is for you: we will be depending too much on AI there as well, right? So we need to know the distinction. Critical thinking is not just getting information or giving attention; what is critical thinking, really?

So that is probably a very important question to ask.

Mr. Kenny Kesar

Critical thinking is very necessary for us to innovate further. The biggest issue the AI world is facing: 30% of the content AI is consuming is already AI-generated. So basically it's feeding back and learning on its own output, when originally it was learning on artifacts that were built through different thinking processes. I would say it's both a boon and a risk: a boon because it gets work done, but over time a risk that we will stop evolving, because if we don't exercise the brain as a muscle, if we don't exercise it and build those neurons which really drive critical thinking, it will be a very big loss to society.

So I would say: everybody is asking for general intelligence. Now, how do we make sure that as AI and computers get general intelligence, we are not losing our own intelligence to create that general intelligence again? It's a vicious cycle. It's a question we're debating and trying to answer for ourselves; everybody has perspectives, but it's something I think about. Do I have an answer to it? No. But I feel that critical thinking, on both sides, is something we really need to critically think about.

Mr. Vinayak Godse

Yeah. So for everything that you propose as a solution, there is always this challenge of what it means in the new paradigm. Now, for the concluding part of this discussion, a question to each of you, briefly. We have been doing security, privacy and safety in a particular way, right? But as this paradigm is new, can we think about some anchor controls right now that we should be mindful of? When AI was getting built, only after three years did we start talking about AI governance and all these things. So is there a way for us to think about some kind of anchor control, some idea, some concept, that could help us navigate the challenges AGI could throw at us? I can start with you, briefly, and each of you can comment on this.

Simonas Cerniauskas

Yeah, well, of course there are some technical things, like watermarks, labeling and other technical features, that could help us a bit to identify at least some threats. Then we can also talk about regulatory measures, but that's a broader topic for further discussion. Especially here in Europe we tend to regulate and over-regulate everything, but in a way I think at least some measures here can also be really viable and really reasonable.

Mr. Simonas Satunas

Well, I come from a very small country. Israel is so small that it's like a pin on the map, and therefore our regulatory approach is that we are unable to determine global regulation, and in this AI race I think what matters more is global regulation. So, since we are a very tiny country, we must work with positive tools: say, okay, we cannot affect the regulation, but how can we work together with the AI developers to make the personality of the AI more moral, more ethical? How can we bring egalitarianism and equality into consideration? How can we avoid bias? And I think that makes us work together with the industry and with academia to find out about new consequences.

I think that in many cases the giants, big tech, don't aim towards unethical outcomes, but they work towards financial incentives that make AI behave in a very immoral way. If I take, for example, the conflict in Myanmar, in Burma: we saw that Meta was not actively promoting violence in Myanmar, but Meta's algorithm was designed to attract attention in a way that made the more violent posts much more viral and made violence flourish. So if we are able to promote a dialogue, and if we are able to be alongside the industry in the development of new AI, sometimes we will be able to make AI more ethical.

Mr. Vinayak Godse

So, Alexandra, your view. One part is the anchor control, the idea or concept; the second part is, how do you get in early? Early in the game, right? When AI happened, it is only now, in '25 and '26, that we are discussing responsibility, alignment, adoption and governance. So for the AGI discussion: what are the anchor controls, and what are the ideas and ways for us to get into the discussion early?

Ms. Alexandra Bech Gjørv

Well, I think at least you need to work on resilience and robust rollback mechanisms. A little bit like what we're experiencing now in Europe, where we all have to practise living without electricity. You know it's a realistic option that somebody sabotages your electricity, and then you look at how dependent you really are and what the alternatives are. And you plan from a point of view where you not only work to reduce risk, but you really work to reduce the consequences of those risks occurring. If you work on the traditional risk matrix, it's always about avoiding bad outcomes; making the bad outcomes less bad is something that, at least we think, the new realities are propelling, and I think that kind of thinking is important.

Mr. Vinayak Godse

Kenny, your view on this?

Mr. Kenny Kesar

Sure. Actually, the way we look at AI, from ethical AI to biases to data privacy, it's very similar to what a human would do even today. Today we have standard operating procedures that we review for biases and review for content; in our organizations, we have functions that manage this. And the other thing is, we train people on ethical practices, on non-bias and things like that. So ultimately, AI is very similar, where we will have, for lack of a better word, what I call an AOP instead of an SOP: an agent operating procedure, or AI operating procedure, where we have to train AI not to be biased.

So I feel there is a big industry in the offing which is going to manage and create models, LLMs, to validate that the responses from your common models are ethically right and non-biased. Because today, as organizations, we invite experts from outside to come and review our practices: whether we are ethical, whether we are transparent, a number of those things. Very similarly, as we mature towards more general intelligence and new ways of working, I feel these control structures will come in cybersecurity, in the ethical use of AI, in the unbiased use of AI. So ultimately it will be a checks-and-balances system, and we will see innovation in these areas.

That is how we feel it. It’s an evolving area. Let’s see how it happens.

Mr. Vinayak Godse

Thank you, all of you, for really helping us understand the meaning of this concept of AGI, how it will pan out from now, and what kind of challenges it will throw at us. There are definitely opportunities as well, which we don't have time to discuss. But what could we start doing right now? This was definitely one of the important conversations, and I hope it helped you understand what we mean when we talk about AGI today. Thank you. Join me in giving a big hand to my co-panelists for helping us understand. Thank you, Simon. Thank you, Nir.

Thank you. We have a photo shoot now; Alexandra, please come here for the photo shoot. I also request the fireside panelists, Hendrikus sir and Narendra sir, to please join us for the photo shoot. Before we commence the fireside session, I would like to announce the launch of the AI Cyber Security Terminal, published today. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (30)
Factual Notes
Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The panel discussion on Artificial General Intelligence included Vinayak Godse and Simonas Cerniauskas among the speakers.”

The knowledge base lists the same panelists for the AGI discussion, confirming their participation [S4] and [S1].

Confirmed (high)

“Roughly 50 % of Israelis said they trust generative‑AI tools more than friends.”

A poll cited in the knowledge base shows that 50 % of respondents consider trust foundational for long-term success of transformative technologies, matching the reported figure [S70].

Additional Context (medium)

“A three‑to‑seven‑year horizon is projected for AGI to perform any professional human task with comparable accuracy.”

Other sources note that many experts expect AGI development to take about five years, and some leaders explicitly favor slower timelines, providing nuance to the 3-7 year estimate [S24] and [S52].

Additional Context (medium)

“The industry may be heading toward an “over‑capacity” situation for a couple of years, with excess compute resources.”

Discussion about a global compute divide highlights that regions lacking compute fall further behind, underscoring concerns about mismatched capacity and potential over-supply in well-resourced areas [S77].

External Sources (77)
S1
Artificial General Intelligence and the Future of Responsible Governance — Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas, Simonas Cerniauskas
S2
Artificial General Intelligence and the Future of Responsible Governance — Mr. Vinayak Godse, Moderator/Host of the panel discussion on AGI (Artificial General Intelligence)
S3
Subrata K. Mitra Jivanta Schottli Markus Pauli — Gandhi was vehemently opposed to Partition, an outcome which other senior Congress leaders like Jawaharlal …
S5
Artificial General Intelligence and the Future of Responsible Governance — Simonas Cerniauskas, Mr. Simonas Satunas, Mr. Kenny Kesar, Ms. Alexandra B…
S6
Artificial General Intelligence and the Future of Responsible Governance — Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
S7
Artificial General Intelligence and the Future of Responsible Governance — Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas…
S8
https://dig.watch/event/india-ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as t…
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-automation-in-telecom_-ensuring-accountability-and-public-trust-india-ai-impact-summit-2026 — I mean two steps globally. So the proof of concept we are trying to do in Southeast Asia is actually prove that data can…
S10
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Collaboration with industry was deemed essential in the regulation of AI. Industry was seen as a valuable source of reso…
S11
AGI moves closer to reshaping society — There was a time when machines that think like humans existed only in science fiction. But AGI now stands on the edge of b…
S12
https://app.faicon.ai/ai-impact-summit-2026/how-the-global-south-is-accelerating-ai-adoption_-finance-sector-insights — And in terms of regulation, Reserve Bank’s approach has been largely tech neutral. It’s tech agnostic in some sense, bec…
S13
https://dig.watch/event/india-ai-impact-summit-2026/secure-finance-risk-based-ai-policy-for-the-banking-sector — When we dug deeper we came to know that initially it was deployed in 2004 by one entity and then slowly slowly it was th…
S14
The new European toolbox for cybersecurity regulation — Additionally, strategic regulations are needed to reduce dependence on specific manufacturers, particularly from China, …
S15
OPENING SESSION | IGF 2023 — Large language models require significant compute power and data
S16
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Dennis Kenji Kipker:Yeah, of course. When developing AI, we have high impact privacy risks. And I think this is quite cl…
S17
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S18
WS #31 Cybersecurity in AI: balancing innovation and risks — Even with good data, the human creating the algorithm must ensure fairness. This is a key point in addressing bias and e…
S19
What policy levers can bridge the AI divide? — Lithuania advocated for regulatory sandbox approaches with differentiated regulation based on risk levels, leveraging sm…
S20
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Sociocultural | Human rights Tracey expresses concern that over-reliance on AI for decision-making and problem-solving …
S21
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S22
Artificial General Intelligence and the Future of Responsible Governance — Satunas argues that while compute gets most attention, achieving AGI requires a comprehensive approach including energy …
S23
Artificial General Intelligence and the Future of Responsible Governance — Compute is just one element; energy, data, implementation, language, and human education are equally critical Speakers …
S24
Folding Science / DAVOS 2025 — Mentions that AGI development may take a five-year timescale rather than the one or two years some are predicting. Time…
S25
HIGH LEVEL LEADERS SESSION I — Another key point highlighted in the discussions was the need for dialogue and consensus on data flow. Data has become t…
S26
WS #103 Aligning strategies, protecting critical infrastructure — How to balance security needs with privacy and human rights concerns in policy approaches
S27
Big data for prevention: Balancing opportunities with challenges — Conflict prevention largely depends on the availability of timely data and information. Whether it concerns the collecti…
S28
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation’s emphasis on democratization and broad participation indicates that successful AI adoption requires en…
S29
Is the AI bubble about to burst? Five causes and five scenarios — An investment in national security and technological sovereignty Risk is shifted from private investors to the public, …
S30
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Shikoh Gitau: and I’m really glad to be here. Thank you so much for having me. And apologies for joining in late. So, th…
S31
AI investment shows strong momentum beyond bubble fears — AI investment is not showing signs of a speculative bubble, according to the Alibaba Group chairman. Instead, he argued at t…
S32
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Audience: Thank you so much for your presentation. My name is Hasara Tebi. I’m from Mawadda Association for Family Sta…
S33
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Noushin Shabab:principles. Yeah. That’s actually a very good question. The most, the two most important principles for m…
S34
Artificial Intelligence & Emerging Tech — Another significant consideration is the protection of data privacy. In an age characterised by concerns about data priv…
S35
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities — Apple, Microsoft, and Google arespearheadinga technological revolution with their vision of AI smartphones and computers…
S36
Artificial General Intelligence and the Future of Responsible Governance — Mr. Kenny Kesar introduced the concept of accuracy progression through “five nines,” explaining that while AI evolved fr…
S37
Artificial General Intelligence and the Future of Responsible Governance — Satunas provides a simple definition of AGI as systems capable of performing any human task with professional-level accu…
S38
https://app.faicon.ai/ai-impact-summit-2026/artificial-general-intelligence-and-the-future-of-responsible-governance — So my humble opinion is that compute is one element in a chain of elements and that sometimes we treat this element as t…
S39
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Dennis Kenji Kipker:Yeah, of course. When developing AI, we have high impact privacy risks. And I think this is quite cl…
S40
Ethics and AI | Part 6 — The EU Act categorizes AI systems into different risk levels—unacceptable, high-risk, and low-risk—each with correspondi…
S41
WS #31 Cybersecurity in AI: balancing innovation and risks — Even with good data, the human creating the algorithm must ensure fairness. This is a key point in addressing bias and e…
S42
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Matt O’Shaughnessy: Thank you so much, David. And it’s great to be here, even just virtually. So, you asked about the…
S43
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Jimena Viveros: Hello. I don’t know if anyone can hear me. Yes? Okay, great. So it is great to be here, sorry for the …
S44
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S45
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — But even those skills can be eroded without regular practice and engagement. Core cognitive capabilities, such as judgme…
S46
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Sociocultural | Human rights Tracey expresses concern that over-reliance on AI for decision-making and problem-solving …
S47
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers expressed excitement abou…
S48
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Overall Tone:The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing …
S49
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S50
From India to the Global South_ Advancing Social Impact with AI — The discussion maintained an overwhelmingly optimistic and energetic tone throughout. It began with excitement about you…
S51
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S52
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Both speakers distinguished their positions from extreme “doomerism” while acknowledging serious risks that require care…
S53
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S54
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk.
S55
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S56
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S57
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S58
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S59
Main Session 2: Protecting Internet infrastructure and general access during times of crisis and conflict — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S60
Evolving Threat of Poor Governance / DAVOS 2025 — The tone was largely serious and analytical, with panelists offering thoughtful insights on complex governance challenge…
S61
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S62
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S63
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S64
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S65
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S66
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Robert Opp:So, please feel free to join us at the table. Don’t have to sit in the gallery. This is a round table after a…
S67
Policy Network on Artificial Intelligence | IGF 2023 — Audience:Good morning. Good morning. Jingbo from UN University. Actually, this is much more intimate so we can communica…
S68
Keynote-Demis Hassabis — This discussion features a keynote address by Sir Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laurea…
S69
The Dawn of Artificial General Intelligence? / DAVOS 2025 — In summary, the discussion emphasized the complex challenges and opportunities presented by AGI development, with no cle…
S70
What Is Sci-Fi, What Is High-Tech? / Davos 2025 — She references a poll showing that 50% of respondents believe trust is foundational for long-term success in introducing…
S71
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Katarzyna Ellis from EY Poland presented compelling research data that illustrated the dramatic transformation occurring…
S72
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Diogo Cortiz:I totally agree with Heloisa about her intervention. So I would like to switch a little bit my comments reg…
S73
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Well, I think we have to look perhaps further out in five years because we’re building something that should work for so…
S74
Debating Technology / Davos 2025 — Yann LeCun: So my colleagues and I certainly understand where we are going. I can’t claim to understand what other peo…
S75
Building the Next Wave of AI_ Responsible Frameworks & Standards — yeah so I think to the point Ankush was mentioning AI technology is fundamentally designed on probabilistic model and an…
S76
Knowledge Café: WSIS+20 Consultation: Towards a Vision Beyond 2025 — Audience: Oh, thank you. Yeah, just curious to know how many UN agencies are involved in WSIS, and UNGIS, it stands for …
S77
WS #462 Bridging the Compute Divide a Global Alliance for AI — Alisson O’Beirne reinforced this analysis, noting that “as folks are left behind and as there’s a lack in compute capaci…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mr. Simonas Satunas
5 arguments · 161 words per minute · 1149 words · 426 seconds
Argument 1
AGI as AI that can perform every human task at professional level
EXPLANATION
Satunas defines AGI as a system capable of carrying out any human task with the same accuracy and professionalism as a qualified human professional. He notes that this definition is deliberately simple to make the concept digestible for a broad audience.
EVIDENCE
In his opening remarks he states, “my definition of AGI is very simplistic … AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional” [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The definition is corroborated by the panel report which states that AGI will be able to perform every human task with professional-level accuracy [S1] and by a detailed summary of his remarks confirming this simple definition [S4].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
Argument 2
Expectation that AGI could appear within 3–7 years
EXPLANATION
Satunas argues that the milestone of achieving AGI is likely to be reached within a three‑to‑seven‑year horizon, based on recent advances and growing public trust in generative AI tools. He frames this as a near‑term prospect rather than a distant future.
EVIDENCE
He says, “I would say that it’s a matter of 3 years to 7 years until we reach that milestone” [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Satunas’ timeline is supported by the same report, which records his estimate of a 3-to-7-year horizon for reaching AGI [S1] and by the extended analysis of his statements in the discussion summary [S4].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
DISAGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar
Argument 3
Compute is only one element; energy, data, implementation, and human skills are equally critical
EXPLANATION
Satunas stresses that while compute power is essential, other factors such as energy supply, high‑quality data, implementation frameworks, language considerations, and especially human critical‑thinking skills are equally vital for realizing AGI. He warns against treating compute as the sole bottleneck.
EVIDENCE
He uses a metaphor about different transport technologies and lists “Compute is one of the elements … energy is also important … Data is extremely important … Implementation is important … I think that one of the elements that we are not investing enough is the human element” [72-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasizes a multi-factor view of AGI development, a point echoed in the external commentary that compute is just one link in a chain of necessary elements such as energy and data [S8] and reinforced by the panel’s own synthesis of his framework [S4].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
AGREED WITH
Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
DISAGREED WITH
Simonas Cerniauskas, Mr. Vinayak Godse
Argument 4
Risks span four levels – classical (privacy, cyber‑fraud), health, social, macro (society, democracy) – requiring coordinated mitigation
EXPLANATION
Satunas categorises AI‑related risks into four layers: traditional security and privacy threats, impacts on physical and mental health, social‑level effects such as empathy erosion, and macro‑level threats to democracy and societal stability. He calls for national and international strategies that address each layer in a coordinated way.
EVIDENCE
He outlines the four levels, stating “classical risks like privacy security cyber fraud … human health and mental health … social level … macro level … democracy” and argues for a collaborative mitigation strategy [131-139].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-level risk taxonomy is documented in the discussion transcript and highlighted in the external summary of his risk framework [S4] as well as in a separate overview of the panel’s risk categorisation [S1].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
Argument 5
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
EXPLANATION
Satunas argues that small nations cannot dictate global AI rules, so they must work with industry and academia to embed ethical principles, egalitarianism, and bias mitigation into AI systems. He cites the Myanmar example where platform algorithms amplified violent content despite the platform’s stated intent.
EVIDENCE
He says, “we must work together with the AI developers … to make the personality of the AI more moral … In Myanmar the algorithm of Meta was designed to attract attention in a way that make the AI the more violent post much more viral” [174-180].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for industry-academia partnership in AI governance is affirmed by a separate standards-focused report that stresses collaboration as essential for effective regulation [S10].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
DISAGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Mr. Vinayak Godse
2 arguments · 104 words per minute · 1988 words · 1138 seconds
Argument 1
Urgency to understand and define AGI now for societal governance
EXPLANATION
Godse warns that societies, especially India, have lagged behind AI developments and must now define AGI to shape governance, security, privacy and ethics before the technology matures. He frames the discussion as essential for preparing policy frameworks.
EVIDENCE
He notes that “if you don’t pay attention now what is coming … we will miss … governing it better” and asks the panel to help define AGI for security, privacy and ethics [1-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Broader analyses of AGI’s transformative potential underline the urgency for policy preparation, noting that AGI could reshape societies as profoundly as electricity or the internet [S11].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
AGREED WITH
Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
DISAGREED WITH
Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Kenny Kesar
Argument 2
Emphasizes need for immediate anchor controls (technical safeguards, regulatory steps) to guide AGI development
EXPLANATION
Godse calls for concrete, early‑stage controls—technical, regulatory, and procedural—to steer AGI development toward safe outcomes. He asks the panel to suggest anchor controls that can be applied now.
EVIDENCE
He explicitly asks for “anchor control” ideas and later repeats the request for early safeguards [7] and again at the end of the discussion [172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early-stage regulatory measures are advocated in the global AI standards community, which calls for pre-emptive technical safeguards and stakeholder collaboration [S10]; a contrasting view notes that some regulators adopt a technology-neutral stance, highlighting a debate over the timing of such controls [S12].
MAJOR DISCUSSION POINT
Call for Early Governance and Anchor Controls
DISAGREED WITH
Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Simonas Cerniauskas
3 arguments · 132 words per minute · 632 words · 286 seconds
Argument 1
AGI must reason, learn, adapt, transfer knowledge and be non‑narrow
EXPLANATION
Cerniauskas outlines the core capabilities that most definitions of AGI share: reasoning, learning, adaptation, knowledge transfer, and a breadth that goes beyond narrow, task‑specific AI. He suggests these traits distinguish AGI from current systems.
EVIDENCE
He lists these traits: “the smarter AI should reason … learn … adapt … transfer knowledge … shouldn’t be very narrow” [15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A recent overview of AGI capabilities describes exactly these traits-reasoning, learning, adaptation, and cross-domain knowledge transfer-as distinguishing AGI from narrow AI [S11].
MAJOR DISCUSSION POINT
Definition and Timeline of AGI
Argument 2
Massive compute investment fuels progress but may be a bubble
EXPLANATION
Cerniauskas observes that the current surge of investment in compute resources is driving rapid AI advances, yet he questions whether this level of spending is sustainable or over‑estimated, hinting at a possible bubble.
EVIDENCE
He remarks that “we are at super high cycle of those investments … we might understand that it’s overestimated, overinvested” and cites Zuckerberg’s comment about overcapacity [70-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel’s own analysis notes a “super-high cycle” of compute investment that could be over-estimated, aligning with external commentary on the risk of an investment bubble in AI hardware [S4].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
DISAGREED WITH
Mr. Simonas Satunas, Mr. Vinayak Godse
Argument 3
Technical controls such as labeling, regulatory measures, and European‑style oversight can provide early safeguards
EXPLANATION
Cerniauskas suggests that practical technical measures—like model labeling—and regulatory frameworks, especially those common in Europe, can act as early protective layers while broader governance discussions continue.
EVIDENCE
He mentions “technical things like labeling … regulator measures … Europe tends to overregulate” as possible early safeguards [173-174].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry-academia collaboration on standards is highlighted as a pathway to early safeguards, and European regulatory approaches are cited as examples of proactive oversight in AI governance [S10][S14].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Mr. Kenny Kesar
5 arguments · 156 words per minute · 1299 words · 497 seconds
Argument 1
Accuracy improvements (moving from 90 % to 99.999 %) require years; deterministic models will grow with AGI
EXPLANATION
Kesar explains that moving AI accuracy from current levels to near‑perfect performance (five‑nines) is a multi‑year effort, with each additional nine requiring one to two more years. He links higher accuracy to the emergence of more deterministic models that will accompany AGI.
EVIDENCE
He states “the epitome of accuracy is five nines … for AI to get from 90 % to 99 % it took five to ten years … every nine you add is another year or two” and adds that deterministic models will increase as we approach general intelligence [44-48].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
Argument 2
Advanced AI can launch sophisticated attacks, impersonate leaders, raising real threats
EXPLANATION
Kesar warns that as AI becomes more capable, it can be used to conduct advanced cyber‑attacks and even impersonate high‑level executives, creating genuine security threats that must be anticipated.
EVIDENCE
He notes “the biggest thing is I could be emulating a human … a CEO and make a decision … the threat is real” [105-107].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
AGREED WITH
Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Argument 3
Tiered model approach (small LLMs for routine tasks, large models for complex problems) to manage compute and risk
EXPLANATION
Kesar proposes a hierarchy of AI models where lightweight language models handle simple, high‑frequency tasks while larger, more powerful models are reserved for complex, high‑impact problems, thereby balancing compute costs and security concerns.
EVIDENCE
He describes “small language models that will do smaller things … bigger models that will solve world hunger … I think there will be tiering” [120-126].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
AGREED WITH
Mr. Simonas Satunas, Simonas Cerniauskas, Ms. Alexandra Bech Gjørv
Argument 4
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
EXPLANATION
Kesar suggests that organizations should develop dedicated AI Operating Procedures (AOP) similar to traditional SOPs, to systematically audit AI outputs for bias, ethical compliance, and data privacy as AI systems become more autonomous.
EVIDENCE
He explains “we will have … AOP … where we have to train AI in terms not to be biased … industry will manage and create models to validate responses” [191-198].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
DISAGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Argument 5
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
EXPLANATION
Kesar points out that a growing share of online content is AI‑generated, which can create a feedback loop where AI trains on its own outputs, potentially stalling human critical‑thinking development and innovation.
EVIDENCE
He states “30 % of the content is consuming is AI generated already … we are feeding back and it’s learning on the same model … we will stop evolving because we don’t exercise the brain as a muscle” [165-169].
MAJOR DISCUSSION POINT
Societal Impact – Cognition, Critical Thinking, Misinformation
AGREED WITH
Mr. Simonas Satunas, Mr. Vinayak Godse
Ms. Alexandra Bech Gjørv
5 arguments · 148 words per minute · 942 words · 380 seconds
Argument 1
Need low‑latency, energy‑efficient neuromorphic/edge hardware for situational awareness
EXPLANATION
Gjørv argues that achieving human‑like situational awareness requires hardware that can process information with millisecond latency while being energy‑efficient, highlighting neuromorphic and edge computing architectures as essential.
EVIDENCE
She mentions “low latency, energy efficient hardware, neuromorphic and edge computing and architectures beyond auto regression” as necessary for fast, contextual decisions [31-33].
MAJOR DISCUSSION POINT
Technical Foundations – Compute, Hardware, Data, Energy
AGREED WITH
Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Kenny Kesar
Argument 2
Human oversight and ethical frameworks are essential; machines can inherit and amplify bias
EXPLANATION
Gjørv stresses that human oversight is crucial because AI systems can replicate and magnify existing biases. She illustrates this with a basketball‑referee example where video analytics removed racial bias from decisions.
EVIDENCE
She recounts Michael Lewis’s anecdote about basketball video surveillance eliminating racist decisions, showing how machines can improve human bias [96-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of human oversight and bias mitigation is reinforced by external discussions on collaborative governance models that stress ethical frameworks and oversight mechanisms [S10].
MAJOR DISCUSSION POINT
Ethics, Bias, Governance, and Anchor Controls
Argument 3
Privacy limits on personal data hinder development of human‑level situational awareness
EXPLANATION
Gjørv notes that building AI with true human‑like contextual understanding requires large amounts of personal data, but privacy regulations and concerns restrict access to such data, slowing progress toward AGI.
EVIDENCE
She says “in order to get there … we have to study a lot of data that would be considered private, personal … so there’s really limits on privacy” [35-37].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory perspectives note that privacy-centric policies can constrain data availability for advanced AI, offering an alternative viewpoint on the trade-off between privacy and AI progress [S12].
MAJOR DISCUSSION POINT
Security and Privacy Challenges
AGREED WITH
Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Vinayak Godse
DISAGREED WITH
Mr. Simonas Satunas
Argument 4
Building resilience, robust rollback mechanisms, and reducing consequences of failures are key to future‑proof societies
EXPLANATION
Gjørv advocates for preparing societies to survive disruptions (e.g., electricity outages) by developing resilient systems, rollback capabilities, and contingency plans that limit the impact of AI failures.
EVIDENCE
She draws a parallel to European electricity-outage preparedness, stating “we all have to practice on living without electricity … looking at how dependent we are … planning … making the bad outcomes less bad” [187-189].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
European cybersecurity policy emphasizes resilience, rollback capabilities, and reducing impact of system failures, directly supporting her call for such measures [S14].
MAJOR DISCUSSION POINT
Call for Early Governance and Anchor Controls
AGREED WITH
Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar, Mr. Simonas Satunas
Argument 5
AI‑driven manipulation can create divergent information universes, threatening democracy and geopolitics
EXPLANATION
Gjørv describes how AI‑generated content can produce separate, self‑reinforcing information ecosystems that distort public perception, posing risks to democratic processes and international stability.
EVIDENCE
She references a paper on “agent swarms” and the Ukraine-Russia war, noting how large populations can be overpowered by completely different views of reality [146-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AGI’s societal impact warn that AI-generated content can fragment information ecosystems and pose risks to democratic stability, echoing her concern [S11].
MAJOR DISCUSSION POINT
Societal Impact – Cognition, Critical Thinking, Misinformation
Agreements
Agreement Points
Urgent need for early/anchor controls and governance mechanisms for AGI development
Speakers: Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Urgency to understand and define AGI now for societal governance
Technical controls such as labeling, regulatory measures, and European‑style oversight can provide early safeguards
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
Building resilience, robust rollback mechanisms, and reducing consequences of failures are key to future‑proof societies
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
All panelists stress that, given the rapid progress toward AGI, concrete early-stage safeguards, ranging from technical labeling and regulatory measures to AI-specific operating procedures, resilience planning, and global collaborative regulation, are essential to steer development safely [1][173-174][191-198][187-189][174-180].
Recognition of multi‑layered risks (privacy, security, health, social, macro) and the need for coordinated mitigation
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv, Mr. Vinayak Godse
Risks span four levels — classical (privacy, cyber‑fraud), health, social, macro (society, democracy) — requiring coordinated mitigation
Advanced AI can launch sophisticated attacks, impersonate leaders, raising real threats
Privacy limits on personal data hinder development of human‑level situational awareness
Urgency to understand and define AGI now for societal governance (including security and privacy)
The speakers converge on a taxonomy of risks, from traditional privacy and cyber-fraud to broader societal and democratic threats, and agree that coordinated, multi-level strategies are required, noting both technical vulnerabilities and privacy constraints [131-139][105-107][35-37][1-7].
POLICY CONTEXT (KNOWLEDGE BASE)
This multi-dimensional risk framing mirrors discussions on data flow governance and the need to balance security with privacy and human rights at international workshops such as WS #103 and IGF sessions, highlighting coordinated mitigation as a policy priority [S25][S26].
Compute is a critical but not sole factor; energy, data, implementation, and human skills are equally vital
Speakers: Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Kenny Kesar, Ms. Alexandra Bech Gjørv
Compute is only one element; energy, data, implementation, and human skills are equally critical
Massive compute investment fuels progress but may be a bubble
Tiered model approach (small LLMs for routine tasks, large models for complex problems) to manage compute and risk
Need low‑latency, energy‑efficient neuromorphic/edge hardware for situational awareness
All agree that while compute power drives AI advances, it must be balanced with energy supply, high-quality data, appropriate hardware architectures, and human critical-thinking capacities; over-investment risks are noted, and tiered model strategies are proposed to optimise compute use [72-90][70-71][120-126][31-33].
POLICY CONTEXT (KNOWLEDGE BASE)
Authoritative analyses stress a holistic approach to AGI, emphasizing that compute must be complemented by energy infrastructure, data quality, implementation strategies, language considerations, and human education rather than being the sole driver [S22][S23].
Potential erosion of human critical thinking due to over‑reliance on AI‑generated content
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar, Mr. Vinayak Godse
Critical thinking is nothing but your ability to give attention to various dimensions; AI makes this easier but may reduce genuine critical thinking
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Discussion on how dependence on AI reduces critical thinking and increases misinformation risk
Satunas, Kesar, and Godse all highlight that heavy reliance on AI tools can diminish human critical-thinking skills, leading to feedback loops of AI-generated content and heightened misinformation risks [156-163][165-169][150-153].
Similar Viewpoints
Both argue that compute should be managed strategically—Satunas stresses a multi‑factor ecosystem, while Kesar proposes a tiered model architecture to balance compute demands and risk [72-90][120-126].
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Compute is only one element; energy, data, implementation, and human skills are equally critical
Tiered model approach (small LLMs for routine tasks, large models for complex problems) to manage compute and risk
Both point out that privacy constraints are a major barrier to advancing AI capabilities and must be addressed early in governance frameworks [1-7][35-37].
Speakers: Mr. Vinayak Godse, Ms. Alexandra Bech Gjørv
Urgency to understand and define AGI now for societal governance
Privacy limits on personal data hinder development of human‑level situational awareness
Both recognize that compute investment is driving AI forward but warn against treating it as the sole bottleneck, emphasizing a broader ecosystem of resources [70-71][72-90].
Speakers: Simonas Cerniauskas, Mr. Simonas Satunas
Massive compute investment fuels progress but may be a bubble
Compute is only one element; energy, data, implementation, and human skills are equally critical
Both stress the necessity of human oversight, ethical frameworks, and multi‑stakeholder collaboration to prevent bias and ensure moral AI behavior [174-180][96-102].
Speakers: Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
Human oversight and ethical frameworks are essential; machines can inherit and amplify bias
Unexpected Consensus
Privacy as both a barrier to AI progress and a core security concern
Speakers: Mr. Vinayak Godse, Ms. Alexandra Bech Gjørv
Urgency to understand and define AGI now for societal governance (including security and privacy)
Privacy limits on personal data hinder development of human‑level situational awareness
While Godse frames privacy primarily as a governance challenge, Gjørv treats it as a technical limitation to achieving human-like AI, yet both converge on the view that privacy constraints must be tackled early [1-7][35-37].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates consistently highlight privacy as a double-edged issue: it can impede AI development while also being essential for security, as reflected in privacy-security balancing frameworks and calls for transparent AI practices [S26][S32][S34][S35].
Critical thinking erosion linked to AI‑generated content feedback loops
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
Critical thinking is nothing but your ability to give attention to various dimensions; AI makes this easier but may reduce genuine critical thinking
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Satunas raises the conceptual risk of diminished critical thinking, while Kesar provides empirical evidence that 30 % of online content is already AI-generated, unexpectedly reinforcing the same concern from different angles [156-163][165-169].
Overall Assessment

The panel shows strong convergence on four core themes: (1) the necessity of early, multi‑layered governance and anchor controls; (2) a shared risk taxonomy spanning privacy, security, health, social and macro dimensions; (3) acknowledgement that compute is vital but must be complemented by energy, data, hardware, and human skills; and (4) concern that AI over‑reliance could erode human critical thinking. These agreements cut across AI technical development, security, human rights, and broader socio‑economic impacts.

High consensus – most speakers articulate overlapping viewpoints on governance, risk management, and the broader ecosystem needed for safe AGI development. The alignment suggests that future policy and research agendas can build on these common foundations, though divergence remains on precise timelines and the scale of investment.

Differences
Different Viewpoints
Timeline for achieving AGI
Speakers: Mr. Simonas Satunas, Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Kenny Kesar
Expectation that AGI could appear within 3–7 years
Urgency to understand and define AGI now for societal governance
Massive compute investment fuels progress but may be a bubble
No explicit timeline given; focus on accuracy improvements over many years
Satunas states that AGI is likely to be reached in three to seven years [21]. Godse stresses the immediate need to define AGI for governance but does not commit to a specific horizon, implying a longer-term view [1-7]. Cerniauskas points out that most definitions lack clear timing and that the field may be over-invested, suggesting uncertainty about when AGI will materialise [12-15]. Kesar does not provide a timeline, instead discussing multi-year accuracy gains, which signals a more distant outlook [44-48]. Thus the panel is split between a near-term optimistic horizon and a more cautious, uncertain timeline.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent forecasts presented at Davos 2025 suggest a five-year horizon for AGI, contrasting with more aggressive one- to two-year predictions, providing a historical reference point for timeline debates [S24].
Relative importance of compute versus other resources for AGI development
Speakers: Mr. Simonas Satunas, Simonas Cerniauskas, Mr. Vinayak Godse
Compute is only one element; energy, data, implementation, and human skills are equally critical
Massive compute investment fuels progress but may be a bubble
Why compute becomes very important; need for massive compute
Satunas argues that compute is just one link in a chain and that energy, data, implementation and especially human critical-thinking are equally vital [72-90]. Cerniauskas emphasizes the current surge in compute spending as the main driver of rapid AI advances, while also warning it could be over-estimated [70-71]. Godse repeatedly asks why compute is so central to the discussion, suggesting a view that compute is the primary bottleneck [65-68]. The speakers therefore disagree on whether compute should be treated as the dominant factor or as one of several equally important resources.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert commentary underscores that while compute is pivotal, equal emphasis on energy, data, implementation, and human expertise is required for AGI, challenging compute-centric narratives [S22][S23].
What constitutes appropriate early‑stage “anchor controls” for AGI governance
Speakers: Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Emphasizes need for immediate anchor controls (technical safeguards, regulatory steps) to guide AGI development
Technical controls such as labeling, regulatory measures, and European‑style oversight
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
Building resilience, robust rollback mechanisms, and reducing consequences of failures
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
Godse explicitly asks the panel for concrete early-stage anchor controls to steer AGI safely [7][172]. Cerniauskas suggests technical measures like model labeling and points to European regulatory habits as early safeguards [173-174]. Satunas pushes for a global regulatory framework and collaboration with industry and academia to embed ethical principles [174-180]. Gjørv recommends societal resilience and rollback mechanisms to mitigate failures [187-189]. Kesar proposes institutionalising AI Operating Procedures (AOP) as a checks-and-balances system for bias and ethics [191-199]. The disagreement lies in which mechanism (technical labeling, global law, resilience planning, or procedural governance) should be prioritized as the first line of defense.
Balancing privacy constraints with the data needs for human‑level situational awareness
Speakers: Ms. Alexandra Bech Gjørv, Mr. Simonas Satunas
Privacy limits on personal data hinder development of human‑level situational awareness
Data is extremely important; lack of investment in human element
Gjørv highlights that accessing large volumes of personal data is essential for true situational awareness, but privacy regulations impose limits that slow progress [35-37]. Satunas stresses that data is a critical pillar for AGI, listing it alongside compute, energy and implementation, without addressing privacy trade-offs [85-86]. The two positions diverge on how to reconcile privacy protection with the data requirements for advanced AI.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between privacy safeguards and the demand for timely, high-resolution data for situational awareness has been a recurring theme in policy forums addressing data flow, big-data for prevention, and AI-enabled public safety, underscoring the need for nuanced governance [S25][S26][S27][S32][S34].
Unexpected Differences
Role of compute in shaping AI progress versus the risk of a compute‑driven investment bubble
Speakers: Mr. Simonas Satunas, Simonas Cerniauskas
Compute is only one element; energy, data, implementation, and human skills are equally critical
Massive compute investment fuels progress but may be a bubble
Satunas downplays compute as the sole driver, emphasizing a balanced ecosystem of resources [72-90]. Cerniauskas, however, points to the current “super high cycle” of compute investment as the engine of rapid AI advances, while also cautioning that it may be over-invested and unsustainable [70-71]. The tension between viewing compute as a necessary but not dominant factor versus seeing it as the primary catalyst (and potential bubble) was not anticipated given the overall consensus on multi-factor development.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of the AI market note concerns about a speculative compute-driven bubble, with some leaders arguing that current investment reflects sustained demand rather than hype, while others warn of geopolitical risk transfer to the public sector [S28][S29][S31].
Interpretation of “critical thinking” as a solution versus a symptom
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
One of the elements that we are not investing enough is the human element (critical thinking)
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Satunas treats critical thinking as a resource that needs more investment to prepare society for AGI [87-90]. Kenny frames critical thinking as a capability that is being eroded by AI-generated content, suggesting that the problem is a feedback loop that must be broken [165-169]. The unexpected twist is that both see critical thinking as central, yet they locate the problem on opposite sides of the AI-human interaction spectrum.
Overall Assessment

The panel shows broad consensus that AGI will pose significant societal, security, and ethical challenges and that proactive governance is essential. However, there are clear disagreements on the expected timeline for AGI, the primacy of compute versus a multi‑resource approach, the specific form of early anchor controls, and how to balance privacy with data needs. These divergences reflect differing strategic priorities (short‑term optimism vs. cautious uncertainty) and disciplinary lenses (technical, regulatory, societal resilience).

Moderate to high divergence – while all participants agree on the need for action, the lack of alignment on timelines, resource prioritisation, and concrete governance mechanisms could hinder coordinated policy responses and lead to fragmented national strategies.

Partial Agreements
All participants agree that proactive governance is needed to manage AGI risks, but they differ on the preferred pathway: Godse wants immediate anchor controls, Cerniauskas favours technical labeling and European regulation, Satunas pushes for global, multi‑stakeholder regulation, Gjørv stresses societal resilience and rollback, while Kesar proposes institutional AOPs. The shared goal is safe AGI development, yet the routes diverge [7][172][173-174][174-180][187-189][191-199].
Speakers: Mr. Vinayak Godse, Simonas Cerniauskas, Mr. Simonas Satunas, Ms. Alexandra Bech Gjørv, Mr. Kenny Kesar
Urgency to understand and define AGI now for societal governance
Technical controls such as labeling, regulatory measures, and European‑style oversight
Call for global regulation and industry‑academia collaboration to embed morality and avoid profit‑driven unethical behavior
Building resilience, robust rollback mechanisms, and reducing consequences of failures
Propose AI Operating Procedures (AOP) analogous to SOPs for bias, ethics, and compliance
Both agree that human critical thinking is at risk in an AI‑driven world. Satunas calls for investment in critical‑thinking education [87-90], while Kesar warns that AI‑generated content can create a feedback loop that diminishes critical thinking [165-169]. They share the concern but differ on whether the primary remedy is education investment or controlling AI‑generated content.
Speakers: Mr. Simonas Satunas, Mr. Kenny Kesar
One of the elements that we are not investing enough is the human element (critical thinking)
Over‑reliance on AI may erode critical thinking, creating feedback loops of AI‑generated content
Takeaways
Key takeaways
AGI is envisioned as AI that can perform any human task at a professional level, reason, learn, adapt, and transfer knowledge, moving beyond narrow, task‑specific systems.
Panelists estimate a possible emergence of AGI within the next 3–7 years, creating urgency for societal understanding and governance.
Compute power is a critical driver of current AI progress, but it is only one element; energy efficiency, data availability, hardware (neuromorphic/edge), and human skills are equally essential.
Security and privacy risks will intensify as AI becomes more capable, including sophisticated cyber‑attacks, impersonation of leaders, and large‑scale manipulation of information.
Risks are layered: classical (privacy, fraud), health/mental‑health, social (empathy, addiction), and macro (democracy, geopolitical manipulation). Each layer requires specific mitigation strategies.
Ethical oversight, bias mitigation, and human‑in‑the‑loop controls are necessary; proposals include AI Operating Procedures (AOP) analogous to SOPs and technical safeguards such as model labeling.
Over‑reliance on AI may erode critical thinking and create feedback loops of AI‑generated content; education and critical‑thinking training are essential countermeasures.
Early “anchor controls” – technical, regulatory, and resilience measures – should be instituted now to guide AGI development and limit adverse outcomes.
Collaboration across industry, academia, and governments (both national and international) is required to embed morality, ensure fairness, and avoid profit‑driven unethical behavior.
Resolutions and action items
Develop and adopt AI Operating Procedures (AOP) for bias, ethics, and compliance within organizations.
Invest in education programs that strengthen critical‑thinking and AI literacy for the general public.
Pursue a tiered model strategy: small, efficient LLMs for routine tasks and larger models for complex problems to manage compute costs and risk.
Encourage global coordination on AI regulation and standards, with particular emphasis on privacy‑preserving data practices.
Advance research on low‑latency, energy‑efficient neuromorphic and edge hardware to support real‑time situational awareness.
Implement technical safeguards such as model labeling, provenance tracking, and robust rollback mechanisms for AI systems.
Create resilience planning analogous to electricity‑outage preparedness, including contingency and mitigation strategies for AI failures.
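The tiered model strategy in these action items can be sketched as a simple dispatcher. The sketch below is purely illustrative and not from the session: the tier names, thresholds, and the length-based complexity heuristic are all assumptions made for demonstration.

```python
# Illustrative sketch (not from the session) of the tiered model strategy:
# route routine tasks to small models and complex problems to large ones.
# Tier names, thresholds, and the complexity heuristic are assumptions.
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    max_complexity: int  # tasks scoring at or below this go to the tier


TIERS = [
    Tier("small-llm", max_complexity=3),   # routine tasks: cheap, low risk
    Tier("medium-llm", max_complexity=7),  # moderate reasoning
    Tier("large-llm", max_complexity=10),  # complex problems
]


def complexity_score(task: str) -> int:
    """Naive proxy: longer, question-dense tasks score higher (capped at 10)."""
    score = len(task) // 50 + task.count("?") * 2
    return min(score, 10)


def route(task: str) -> str:
    """Return the name of the cheapest tier able to handle the task."""
    score = complexity_score(task)
    for tier in TIERS:
        if score <= tier.max_complexity:
            return tier.name
    return TIERS[-1].name
```

In a real deployment the heuristic would be replaced by a learned router or a confidence check, but the structure shows how compute cost and risk exposure can both be limited by reserving large models for genuinely hard tasks.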
Unresolved issues
Exact timeline for achieving true AGI remains uncertain; estimates vary and no consensus was reached.
How to reconcile the need for massive personal data to achieve human‑level situational awareness with strict privacy regulations.
Specific mechanisms for global AI governance and how to align disparate national regulatory approaches.
Concrete methods to prevent the erosion of critical thinking while AI provides rapid information synthesis.
Details of how to balance compute investment against potential over‑investment bubbles and sustainability concerns.
Implementation pathways for the proposed AI Operating Procedures across diverse industries and jurisdictions.
Suggested compromises
Adopt a hybrid probabilistic‑deterministic approach, using deterministic models where reliability is critical while retaining probabilistic flexibility for innovation.
Employ a tiered model ecosystem to balance performance needs against compute cost, allowing smaller models to handle low‑risk tasks.
Combine technical controls (e.g., labeling, sandboxing) with regulatory oversight, leveraging both industry self‑regulation and government standards.
Encourage responsible AI investment by pairing compute expansion with efficiency improvements to mitigate the risk of a bubble.
Blend human oversight with automated checks, recognizing that neither humans nor AI alone can guarantee ethical outcomes.
Thought Provoking Comments
AGI will be something that can perform every human task at the level of accuracy and professionality of a human professional… 50% of Israelis trust Gen‑AI tools more than they trust their friends.
Provides a concrete, human‑centric definition of AGI and backs it with a striking statistic on public trust, highlighting how perception of AI is already shifting toward AGI‑like expectations.
Shifted the conversation from abstract definitions to societal perception, prompting others to discuss timelines (3‑7 years) and the gap between current AI capabilities and public trust.
Speaker: Simonas Satunas
Machines are quite good at detecting fire or doing various instinctive things as fast as we are, but the ability to interpret context, emotions, ambiguity, surroundings, body language, etc., that’s still quite far away.
Draws a clear line between narrow AI strengths (speed, pattern detection) and the missing human‑like situational awareness, emphasizing the technical and ethical challenges of moving toward AGI.
Introduced the technical‑privacy dimension, leading the panel to discuss hardware (neuromorphic, edge computing) and data privacy constraints as essential hurdles.
Speaker: Alexandra Bech Gjørv
The epitome of accuracy is five nines. So for AI to get from 90 % to 99 %, it took five to ten years. Every nine you add is another year or two, and each extra nine brings us closer to general intelligence.
Frames progress toward AGI in quantitative terms (accuracy nines) and links it to a historical analogy of disruptive technology cycles, giving a measurable perspective on how far we are.
Prompted a discussion on the pace of improvement, the role of regression models, and the need for deterministic‑probabilistic hybrids, steering the talk toward practical engineering roadmaps.
Speaker: Kenny Kesar
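Kesar’s “nines” framing can be made concrete with a small worked calculation. The sketch below is an illustration, not material from the session: it only shows the arithmetic that each added nine of accuracy (90 % → 99 % → 99.9 % …) divides the residual error rate by ten, which is why seemingly incremental accuracy gains hide compounding effort.

```python
# Illustrative arithmetic for the "nines" accuracy framing (not from the
# session): each additional nine cuts the residual error rate tenfold.

def error_rate(nines: int) -> float:
    """Residual error rate for a given number of nines (1 nine = 90% accurate)."""
    return 10 ** (-nines)


def accuracy(nines: int) -> float:
    """Accuracy expressed as a fraction, e.g. 3 nines -> 0.999."""
    return 1 - error_rate(nines)


for n in range(1, 6):
    errors_per_100k = error_rate(n) * 100_000
    print(f"{n} nine(s): accuracy {accuracy(n):.5f}, "
          f"errors per 100,000 tasks: {errors_per_100k:.0f}")
```

Going from one nine to five nines therefore reduces errors per 100,000 tasks from 10,000 to 1, illustrating why each extra nine can take years of engineering effort.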
Compute is one element in a chain of elements… we know AGI will arrive, we must be prepared… the human element—critical thinking, education—is as important as compute.
Challenges the prevailing narrative that compute alone will deliver AGI, expanding the focus to include energy, data, language, and especially human capital.
Redirected the dialogue from a hardware‑centric view to a broader ecosystem view, leading others to mention education, regulation, and societal readiness.
Speaker: Simonas Satunas
Michael Lewis anecdote: in the NBA, video surveillance and statistics eliminated racially biased refereeing decisions. Machines can make people better.
Provides a concrete, positive case where AI corrected human bias, counterbalancing fear‑based narratives and illustrating a pathway for ethical AI deployment.
Shifted tone toward optimism, encouraging participants to consider how AI can improve governance and fairness rather than only posing risks.
Speaker: Alexandra Bech Gjørv
We distinguish between four levels of risk: classical (privacy, security), human health/mental health, social (empathy, bullying), and macro (democracy, foreign manipulation). A national and international strategy must address all levels.
Offers a structured risk taxonomy that moves the conversation from vague concerns to a layered, actionable framework.
Guided subsequent speakers to address specific domains (security, privacy, societal impact) and set the stage for discussing coordinated policy responses.
Speaker: Simonas Satunas
When AI reaches AGI, it could emulate a CEO and make decisions; the threat is real because the AI could act indistinguishably from a human.
Highlights a concrete, high‑stakes scenario of AI misuse, moving the discussion from abstract risk to a tangible governance challenge.
Prompted deeper conversation on security, the need for tiered model deployment, and the importance of robust safeguards before such capabilities emerge.
Speaker: Kenny Kesar
30 % of the content on the internet is already AI‑generated. This feedback loop risks stopping human intellectual evolution because we stop exercising our brains.
Raises a novel, systemic risk: the self‑reinforcing cycle where AI‑generated data trains future models, potentially eroding critical thinking and innovation.
Led to a reflective turn, with participants emphasizing education, awareness, and the necessity of preserving human critical thinking alongside AI adoption.
Speaker: Kenny Kesar
We are in a super‑high cycle of investment; some wonder if it’s a bubble or over‑investment. Zuckerberg even said we might have overcapacity for a couple of years.
Introduces market dynamics and the possibility of a speculative bubble, adding economic context to the technical and ethical discussion.
Tempered optimism, causing the panel to consider sustainability, cost‑effectiveness, and the need for balanced investment strategies.
Speaker: Simonas Cerniauskas
We need resilience and robust rollback mechanisms—plan for the worst‑case like living without electricity—to reduce the consequences of AI failures, not just avoid them.
Proposes a pragmatic, risk‑mitigation approach that focuses on limiting damage rather than solely preventing it, aligning with disaster‑recovery thinking.
Steered the final part of the discussion toward actionable “anchor control” ideas, influencing the concluding remarks on governance and preparedness.
Speaker: Alexandra Bech Gjørv
Overall Assessment

The discussion evolved from a broad, introductory framing of AGI to a nuanced, multi‑dimensional analysis thanks to several pivotal remarks. Definitions anchored in public trust, quantitative accuracy metrics, and a layered risk taxonomy gave the conversation concrete footing. Counterbalancing perspectives—such as the hardware‑centric view versus the human‑capital emphasis, and the optimistic bias‑reduction anecdote versus the stark security‑emulation scenario—created a dynamic tension that pushed participants to explore both opportunities and threats. Economic considerations about investment cycles added realism, while the final focus on resilience and rollback mechanisms translated the debate into actionable governance concepts. Collectively, these thought‑provoking comments shaped a rich dialogue that moved from abstract speculation to concrete policy and societal implications for the impending era of AGI.

Follow-up Questions
What actions and strategies can accelerate AI development toward AGI within the next three to seven years?
Understanding concrete steps and investments needed to influence the AI trajectory is crucial for policymakers and industry to plan resources and research priorities.
Speaker: Mr. Vinayak Godse (to Ms. Alexandra Bech Gjørv)
Why is massive compute essential for AGI, and what role does compute play in achieving attention, context, reasoning, and low latency?
Clarifying the necessity of compute helps justify infrastructure investments and informs discussions on sustainability and scalability of future AI systems.
Speaker: Mr. Vinayak Godse (to panel)
How do language models, ambient computing, and world models affect AGI development, and what are the challenges associated with them?
Exploring these technical dimensions is important to identify research gaps and guide development of more capable and context-aware AI.
Speaker: Mr. Vinayak Godse (to Ms. Alexandra Bech Gjørv)
What current security, privacy, and safety measures can be adopted now to safely scale AI models toward more powerful capabilities?
Identifying actionable safeguards is vital to mitigate emerging threats as AI systems become more advanced.
Speaker: Mr. Vinayak Godse (to Mr. Kenny Kesar)
What ‘anchor control’ mechanisms or concepts can be established now to manage future AGI risks and governance challenges?
Early establishment of control frameworks can provide a foundation for responsible AGI deployment and reduce reactive policy making.
Speaker: Mr. Vinayak Godse (to panel)
How can stakeholders get involved early in shaping AI governance and alignment before AGI becomes mainstream?
Early engagement ensures that ethical, legal, and societal considerations are embedded in AI development rather than retrofitted later.
Speaker: Mr. Vinayak Godse (to Ms. Alexandra Bech Gjørv)
What constitutes true critical thinking in the age of AI, and how can individuals maintain it without over‑relying on AI assistance?
Defining and preserving critical thinking is essential to prevent cognitive atrophy and ensure humans remain capable of independent judgment.
Speaker: Mr. Vinayak Godse (to Mr. Kenny Kesar)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.