From Technical Safety to Societal Impact Rethinking AI Governanc

From Technical Safety to Societal Impact Rethinking AI Governanc

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by stating that AI safety is often framed only in technical terms such as model alignment and benchmark performance, but the discussion must move beyond these to address multidisciplinarity, governance, and real-world impact [14-20]. Speakers emphasized that AI systems do not fail solely because of model flaws; their harms arise from the institutional, economic, and political contexts in which they are deployed [21-24].


Lourino Chemane argued that safety should be understood as the protection of people, requiring AI governance that integrates law, ethics, social sciences, education, labor, and the voices of affected communities [31-36]. He highlighted the need for comprehensive data policies, cybersecurity measures, and interoperable digital-government frameworks to secure national AI strategies and infrastructure [43-48].


Wendy Hall criticized the summit’s lack of gender diversity and warned that safety must include systematic monitoring, longitudinal studies, and the creation of AI measurement and “social-machines” metrology to capture socio-technical effects [78-84][89-103]. Yannis Ioannidis distinguished between the safety of AI technology itself and the safety of its use, calling for regulation of both inputs and outputs and for multidisciplinary oversight [108-119][120-124]. Sara Hooker noted that safety conversations have become more precise yet remain vague, stressing the importance of acknowledging trade-offs and transparently reporting what model capabilities are omitted [135-146][147-166][167-185].


Jibu Elias warned that AI is increasingly a sociopolitical and extractive force that widens socioeconomic gaps and can cause environmental harms such as water depletion from data-center projects [192-207][208-224]. Neha Kumar underscored the relevance of human-centred HCI research and called for genuine inclusivity, asking who designs, benefits from, and decides AI systems [233-239][285-303]. Merve Hickok broadened safety to encompass human-rights and democratic values, arguing that historical power narratives must be challenged to protect citizens [242-245][271-281]. Rasmus Andersen stressed advising political leaders to consider long-term societal impacts and embed safety in policy before harms materialize [248-256]. Tom Romanoff described ACM’s role in translating technical concerns into policy recommendations for lawmakers worldwide [261-265]. Jeanna Matthews posed a provocative question about whether good intentions alone suffice, highlighting the need for enforceable safeguards and accountability [266-270].


The session closed with Virginia Dignum asserting that achieving inclusive, multidisciplinary AI safety will require ongoing dialogue, concrete governance tools, and collective insistence from all stakeholders [375-378].


Keypoints

Major discussion points


Broadening AI safety beyond technical metrics – The session was opened by stressing that AI safety is often framed only in terms of model alignment, robustness, and benchmarks, but real-world value or harm depends on deployment context, governance, and institutional factors [14-19]. Panelists echoed this, noting that safety must prioritize human, social, and institutional impact and draw on law, ethics, education, and affected communities [31-34].


Inclusion and diversity as essential for safe AI – Multiple speakers highlighted the systematic exclusion of women, children, and marginalized groups from AI decision-making. Wendy Hall pointed out the all-male composition of the summit’s leadership and argued that “if it’s not diverse it’s not ethical” [78-85]. Jibu Elias warned that tribal languages are omitted from major models, illustrating cultural exclusion [202-205]. Neha Kumar called for concrete answers to “who is making decisions?” and stressed the gap between inclusive rhetoric and actual practice [285-293].


Policy, regulation, and institutional frameworks are needed – Mozambique’s effort to draft a national AI strategy, data policy, and regulations for data centres and cloud computing shows how governance structures shape safety [42-48]. Rasmus Andersen described advising governments on long-term AI impacts and the need to embed safety in public-service delivery [250-256]. Tom Romanoff explained ACM’s role in turning technical recommendations into policy actions [261-265], while Merve Hickok called for a broader view of safety that links AI policy to human-rights and democratic values [242-245].


Measuring AI systems and acknowledging trade-offs – Wendy Hall introduced the concept of “AI metrology” – a science of measuring social machines and their societal effects [57-68]. Sara Hooker stressed that safety discussions must be precise, expose what has been sacrificed in model design, and require transparent reporting of coverage and omitted safety tests [164-176].


Urgent need for accountability and proactive enforcement – Panelists warned that history shows safety only improves after crises. Jeanna Matthews asked whether good intentions are enough, and Merve Hickok argued that narratives of safety must shift from optional evaluation to mandatory protection of rights [267-279]. Tom Romanoff illustrated the “51 % rule” of political will needed to pass regulations and urged participants to move from “moderate” to active advocacy [326-338]. The session closed with a collective call to “insist” on concrete actions for inclusive, accountable AI safety [359-363].


Overall purpose / goal of the discussion


The panel aimed to re-frame AI safety from a narrow technical problem to a multidisciplinary challenge that integrates governance, policy, societal impact, and inclusive participation, and to generate concrete ideas for future frameworks, standards, and accountability mechanisms.


Overall tone and its evolution


The conversation began formally and optimistically, focusing on the need for broader perspectives [14-19]. It quickly turned critical, with speakers highlighting exclusion, tokenism, and the gap between rhetoric and practice [78-85][285-293]. As the dialogue progressed, it became constructive and solution-oriented, introducing concepts like AI metrology, trade-off reporting, and policy roadmaps [57-68][164-176][250-256]. The final segment adopted an urgent, activist tone, urging participants to move beyond discussion to concrete advocacy and enforcement [267-279][326-338][359-363].


Speakers

Virginia Dignum – Co-host of the session and Chair of the Technology Policy Council of ACM; expert in AI policy, governance, and multidisciplinary safety frameworks [S15].


Lourino Chemane – Chairman of the Board of the National Institute of Information and Communication Technology (Mozambique) and lead of Mozambique’s national AI strategy; focuses on AI policy, governance, and safety from a national-level perspective [S10].


Dame Wendy Hall – Regius Professor of Computer Science, Associate Vice-President and Director of the Web Science Institute at the University of Southampton; former member of the United Nations high-level expert advisory body; expertise in computer science, web science, and AI governance [S3].


Yannis Ioannidis – President of the ACM and Professor at the University of Athens; specialist in computer science and AI safety from a technical standpoint [S2].


Sara Hooker – Co-founder and President of Adaption Labs (formerly with Cohera); AI researcher focusing on large language models, safety, and the societal impact of AI [S1].


Jibu Elias – Researcher and activist examining how technology and innovation institutions acquire knowledge, labor, and legitimacy; concentrates on AI’s sociopolitical and extractive dimensions [transcript].


Speaker 2 – Unnamed participant who contributed a brief comment (“be”) during the discussion; no additional role or expertise identified [S7].


Participant – Audience member who raised a question about multilingual safety and regulatory artifacts; no formal title or affiliation provided [S11][S12][S13].


Neha Kumar – Associate Professor at Georgia Tech, School of Interactive Computing; President of the ACM SIGCHI (Special Interest Group on Computer-Human Interaction); expertise in human-computer interaction, social impact of technology, and inclusive design [transcript].


Merve Hickok – President and Policy Director for the Center for AI and Digital Policy, an independent think-tank working at the intersection of AI policy, human rights, democratic values, and the rule of law [S18][S19].


Tom Romanoff – Director of Policy for the ACM, overseeing global and regional policy committees; former Washington, D.C. think-tank professional who worked with U.S. Congress on tech policy [S20][S21].


Jeanna Matthews – Co-host of the second session of the panel; involved in organizing and moderating the discussion [S22].


Rasmus Andersen – Advisor at the Tony Blair Institute of Government, providing AI guidance to heads of state and senior ministers; expertise in AI policy advisory and strategic planning for governments [S23][S24].


Sara Hooker – (Listed again for completeness; see entry above.)


Additional speakers:


Gina Matthews – Co-host of the session (mentioned by Virginia Dignum) and Chair of the Technology Policy Council of ACM; involved in session moderation and organization [S15].


Full session reportComprehensive analysis and detailed insights

The session opened with Virginia Dignum reminding the audience that AI safety is often reduced to technical notions such as model alignment, red-team testing and benchmark performance, yet these tools “matter” but “do not address the core question” of what determines whether AI creates societal value or harm when deployed [14-20]. She argued that AI systems are never isolated; their impact is shaped by deployment context, governance capacity, incentive structures and the lived realities of the communities that use them, so failures often stem from institutional, economic and political embedding rather than from model flaws alone [21-24].


Dr. Lourino Chemane, chair of Mozambique’s National Institute of Information and Communication Technology, reframed safety as the protection of people, not merely of systems. He stressed that AI governance must prioritise human, social and institutional impact and be grounded in multidisciplinary input from law, ethics, education, labour, social sciences and the affected communities [31-36]. Mozambique is drafting a national AI strategy, a data-policy and a cybersecurity strategy, and has already adopted regulations for data-centre construction and cloud computing to safeguard national sovereignty and democratic processes [42-48]. He also highlighted the need for interoperable digital-government frameworks to ensure that AI improves public-service efficiency while remaining safe [46-48].


Dame Wendy Hall criticised the summit’s lack of gender diversity, noting that “50 % of the population weren’t included yesterday, the women” and that the panels were dominated by “alpha males” [78-85]. She introduced the concept of “AI metrology” – a science of measuring “social machines” to capture socio-technical effects [57-68] and cited concrete initiatives such as the UN high-level expert advisory board, the upcoming AI for Good conference in Geneva (July), the UK National Physical Laboratory’s AI Measurement Centre, and the AI Security Institute as steps toward operationalising AI metrology [57-68]. Hall warned that safety requires systematic monitoring and longitudinal studies, citing Australia’s social-media age-restriction experiment and the unintended consequences of bans that may drive youth to hidden platforms [89-103].


After Hall’s remarks, Virginia Dignum thanked her, acknowledged that Hall needed to leave, and posed a question to the panel about shifting the discourse from a purely technical approach to a broader societal one [104-105].


Yannis Ioannidis distinguished the safety of the technology (the algorithm/model) from the safety of its use, likening the technology to a car that is either working or not [111-115]. He emphasized that the real safety concerns lie in the data-input stage and the deployment-output stage, both of which require regulation and multidisciplinary oversight involving humanities, legal, ethical and civic-society experts [118-124].


Sara Hooker reflected on the evolution of the safety debate, observing that early discussions were vague and centred on existential risk, whereas today the conversation is “messier” but more accountable to real-world impact [151-156]. She noted that the term “safety” remains a blanket term, that trade-offs are inevitable, and that transparent reporting of which safety parameters are covered, which languages are supported and what trade-offs have been made is essential [164-176][167-185]. Hooker also warned that prestige and resource allocation signal how seriously safety is taken, and that panel titles alone do not guarantee substantive action [135-146][147-166].


Jibu Elias warned that AI is increasingly a sociopolitical construct with exploitative and extractive dimensions. He cited the omission of tribal languages from major models, the imposition of Hindi as a national language, and the environmental damage caused by a data-centre in Telangana that depleted groundwater and involved community bribery [202-210][211-224]. Elias highlighted the emerging concern of “AI psychosis” among vulnerable users and critiqued the US-centric AI stack being promoted globally, questioning whether this extractive model will continue [215-224].


Neha Kumar, an HCI scholar, reinforced the human-centred perspective, urging the panel to ask “who is making decisions, who is being benefited, who is part of the design process?” [285-293]. She argued that inclusive rhetoric often remains disembodied, focusing on infrastructure and data without addressing lived impacts on women, children and marginalised groups [294-303]. Kumar suggested drawing on feminist, women’s studies and development studies to interrogate power dynamics and avoid repeating historical development failures [285-303].


Merve Hickok broadened safety to encompass human-rights, democratic values and the rule of law. She argued that the prevailing safety narrative is an “evaluation” driven by powerful interests and called for a shift to mandatory, rights-based safeguards that protect citizens’ freedoms, dignity and democratic participation [242-245][271-281]. Hickok emphasized that such artefacts must be dynamic, cover multiple languages and cultures, and can be mandated by governments (e.g., the California precedent) [364-371].


Rasmus Andersen, advising leaders at the Tony Blair Institute, stressed long-term foresight, urging policymakers to consider how AI will affect citizens in 2030-35 and to embed safety in public-service delivery [250-256]. He cited ongoing lawsuits concerning suicides among young people and the deep-fake regulation example (the Elon Musk/Grok incident) as evidence that significant harms are already emerging [250-256]. Andersen noted that governments are the only arena where imperfect technical, civil-society and industry perspectives can be reconciled, making state-level coordination essential [322-324].


Tom Romanoff described the ACM’s role in translating technical safety concerns into policy action. He explained that the ACM’s policy office works with regional committees worldwide to convey researchers’ recommendations to legislators [261-265]. Romanoff introduced the “51 % rule”, stating that regulatory change occurs only when support exceeds the 51 % threshold, whereas 49 % support is insufficient, and urged participants to move from “moderate” to active advocacy[326-338]. He highlighted the need for concrete artefacts-model cards, dataset cards and user-feedback mechanisms-to be mandated by governments [364-371].


During the audience Q&A, a participant requested multilingual, culturally-aware model-card, dataset-card and system-card evaluations. Hickok responded that such artefacts must be dynamic, cover multiple languages and cultures, and can be mandated by governments, citing the California precedent [359-363][364-371].


Jeanna Matthews posed a provocative question about whether history shows that AI will automatically benefit everyone or whether enforceable “musts” are required. She warned that good intentions alone are insufficient and that without binding safeguards, “people won’t go to jail when they do bad things with AI” [266-270][359-363].


Finally, Virginia Dignum synthesised the discussion, reiterating that safety must move beyond technical robustness to an inclusive, multidisciplinary approach that addresses governance, institutional capacity and societal impact [104-105]. She announced the intention to develop a collaborative AI-safety governance model within the next year and to produce a post-summit report with concrete recommendations[375-379]. The session closed with a shared acknowledgement that achieving inclusive, accountable AI safety will require ongoing dialogue, concrete standards such as multilingual model-card disclosures, and sustained advocacy from both technical and policy communities [359-363][364-371].


Overall, the panel reached strong consensus that AI safety is a socio-technical challenge demanding multidisciplinary governance, inclusive design, systematic measurement and outcome-oriented regulation. Points of contention remained around the primary locus of safety (technology versus use), the preferred horizon for measurement (long-term longitudinal studies versus immediate trade-off reporting), and whether coordination should be led by governments or multistakeholder bodies such as the ACM. Agreed-upon action items include finalising Mozambique’s AI strategy and data-policy, launching an ACM-sponsored journal on AI measurement/metrology, drafting a post-summit report with concrete recommendations, and urging governments to require multilingual, culturally aware model-card disclosures. Unresolved issues-operationalising inclusive governance structures, defining legal liability for harmful AI outputs, and balancing rapid innovation with the time needed for longitudinal safety studies-were identified as priorities for future research and policy work.


Session transcriptComplete transcript of the session
Virginia Dignum

Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. you you Thank you. Thank you. session so if you just want to stand here in front they want to make a picture of all of us you Thank you. Thank you. Yes, you have to sit there. Okay. Good morning, everybody. Thank you very much for being here. My name is Virginia Dignam. I will be co -hosting this session with my colleague Gina Matthews there. We both are the chairs of the… Technology Policy Council of ACM. And today we are here to discuss how to move beyond technical safety and looking at aspects of multidisciplinarity, governance, and real world.

impact. Across global AI discussion, safety is too often being framed in technical terms. Model alignment, red teaming, benchmark performance, frontier containment, and so on. These tools matter and they really are further development is crucial. But they don’t address the core question or at least one of the core questions. What determines whether AI systems produce human and societal value or harm in real deployment contexts? That’s what we are going to discuss in this session. AI systems, like we all know, do not operate in isolation. Their impact is shaped by deployment context, by governance capacity, by incentive structures, and by the lived reality of the communities that use and are impacted by these systems. As such, AI systems do not fail simply because of flaws in the model architecture or in the data or in the alignment technique.

they fail or they produce harm because they are embedded in institutional, economic and political systems. So we will have an open discussion with the panelists. It will be two rounds of panelists. And I would like to start by inviting Dr. Lorine Chaman, who is the chairman of the board of the National Institute of Information and Communication Technology in Mozambique, where he is at this moment leading the national strategy on AI for Mozambique. Please.

Lourino Chemane

Thank you. I would like to start by thanking the invitation to join this panel and also to congratulate the government of India hosting this AI Impact Summit. Going directly to the topic, part of this panel, as part of our exercise of crafting the… the national AI strategy, we look to this topic of safety. And for us, safety… working for the policy subject and from the policy formulation point of view. For our safety, we look at it as the protection of people, not only systems. So we look at AI governance must prioritize human, social, and institutional impact, going beyond technical metrics such as robustness, accuracy, or algorithm alignment. We also look at it from the multidisciplinary governance, grounded in the world context of use of AI.

For us, effective AI policies require input from law, social sciences, education, labor, ethics, and affected communities. So the inclusion of the people and how they will feel safe in using these technologies. We look also from the continuous human oversight and institutional accountability. People must know what’s in the bread box, how they’re designed, if they’re functional, if they’re not functional, if they’re not functional, and what factors that are affecting their lives, the decision made by the algorithm, have taken into consideration their feelings in the design phase. We also look for the protection of children, young people and women. From the studies that were conducted, women and children and youth are the first victims of the bad application of the AI.

We also look for the ethical and social assessment. Mozambique is one of the pilot countries adopting the UNESCO principles of ethics in adopting AI, and we are looking also for the dimension defined by UNESCO in this perspective. Sharing what we are doing in the country now, in Mozambique we are drafting, as I mentioned, our national AI strategy with the support of UNESCO and thank Professor Virginia, who is the leading expert in our team, but the contribution of other experts from UNESCO. We are also drafting our data policy and its implementation strategy, because we believe that data is a fundamental element for AI system. We are reviewing our national cyber security strategy. data that we’re collecting now is that there are already cybersecurity -related problems by the use of a young use of AI model.

We just adopted in Mozambique the regulation for the construction and operation of data centers and also the regulation for cloud computing, because we believe that infrastructure is a fundamental and key element for sovereignty of our country in terms of when it comes to safety, but from the policy point of view for the democratic system and all other dimensions. But we also look at it from the digital government point of view. So we’re reviewing also our interoperability framework that’s related to data to make sure that in adopting AI in the public administration, we address our main objective of improving efficiency and efficacy and delivering public services. For us, these are the elements that will be contained in the overall digital transformation strategy that, if everything goes as planned, will be approved by our government during government.

This year, and we are learning a lot in this summit. and gathering important elements that will help us to uplift and improve our work in crafting this element. Thank you for the opportunity to be part of this session.

Virginia Dignum

Thank you very much, Dr. Shaman. I understand that you have to move to another session, so feel free to leave whenever you need to go. We understand the complexities of the program. Now I would like to ask Dame Wendy Ho, Regis Professor of Computer Science, Associate Vice President and Director of the Web Science Institute at the University of Southampton, and also a former member of the United Nations high -level expert advisory body, to give us some provocative statements. They will be. Good. Provoke us.

Dame Wendy Hall

I’m fed up with just towing the party line. So I will… I have to first apologize, because I have to leave at 11. I’m supposed to be on three panels at the moment. and I also have a lunch date at midday in town. So, that’s my morning. I want to say, I think, three things. One is, what’s really… Four. If you know Monty Python, nobody expected the Spanish Inquisition. Anyway, so first of all, it’s been wonderful to be in India. I love India, and I have a love -hate relationship with this summit. It’s too big. There’s too much going on, and not enough actual real debate about the core. There’s going to be some sort of platitude statement come out today.

Yeah. And I’ve just been come back from the UN. Our advisory board and the new scientific panel get together. They’ve got a panel going on. At the moment, the dialogue that’s starting in… The dialogue that’s starting in the AI for Good conference in Geneva in July, we hope will be a real dialogue. I don’t know what form it’s going to take yet. But we have to knock the world leaders’ heads together. Now, I’m now going to say something which also really struck me. Thank you. Is that working? Yes? At this conference. Everyone’s, I love, you know, in India, AI means all -inclusive. But 50 % of the population weren’t included yesterday, the women. Right? There were no women.

The CEOs of every country, every company, there was one lady CEO from Accenture, I think. There were a couple of ladies on the panels at the end. It was all men. The alpha males of this world. The alpha males of this world. The alpha males of this world. Men. Men. Men. The alpha males of this world. right the world leaders that spoke the ceos that spoke that this world is dominated by men and my mantra has always been in terms of the the lack of women and other other some other diversity points as well but mainly women is if it’s if it’s not diverse it’s not ethical people don’t really understand what that means that means is if you haven’t got a diversity of people discussing a problem how are you going to actually sort out the biases if you haven’t got women at the top level making these decisions trying to set up the guidelines i mean your comment was yeah we want to make sure for the safety of women and children well let’s include the women and children in the discussions i mean that my third point um is that we we are watching i mean i’m very into watching these um experiments i did it all through the web and we need to learn how to monitor what’s going on so that we can say what is the right direction to go in the future.

It means collecting data and evidence and doing longitudinal studies, and it takes time. But take, for example, what Australia is doing with social media. We’ve heard at this conference several other… for teenagers. I mean, didn’t Macron… Who was there yesterday? Macron said under 15 in France. Our Prime Minister, who constantly changes his mind, so I don’t suppose it will happen, but he’s talked… Sorry, that’s a joke for any Brits in the audience, but there aren’t many. He’s saying 16 in the UK, some out of Spain saying 16. There will be unintended consequences of that. Making a ban like that without thinking about the nuances of… Well, what happens if… Well, first of all, the kids are ingenious enough to get round it.

And then they’re back on the dark side of things again, even worse than before. Because they’re doing it in… secret um what happens when they start to use social media how do we train them to do it properly my worry about a ban like that i said i mean it’s very brave of australia to do it first and we can watch and they’re saying six months time they’ll have some evidence of how many under 16s are still on social media but the behavioral issues take much much longer to explore than that and we have to get over this fact that whilst the technology is going on a pace because the alpha males are driving it without any you know just worrying about technical safety maybe um we have to we can’t say well it’s all going too fast we can’t do any we have to study this stuff um we have and i think this is what i want the acm to do i talked at my keynote talk this is my last point by the way i my keynote talk on whatever day it was wednesday on the main stage I talked about two things happening in the UK actually around one is our National Physical Laboratory which is the sort of equivalent of NIST in America has just launched with government backing a centre for AI measurement and the AI Security Institute in the UK and the other security institutes that are growing up around the world that network is now being called largely driven by the US because Trump doesn’t want to call it anything to do with safety I can’t believe I just said that anyway, but then he was the man that drank bleach in Covid they’re calling their network the network for AI measurement and I think this is a breakthrough I think this is, I mean I love AI for science, but we need to think about the science of AI and I think, and that’s a social it’s a socio -technical and I’m starting to call these things social machines as we did on the web that came from Tim Berners -Lee the idea of technology and society coming together to create artefact systems that wouldn’t have existed if they hadn’t come together and the technology doesn’t understand society at the moment most of society doesn’t understand this technology but together those two systems will create socio -technical systems or social machines and I want to build a science of studying social machines and it will be called AI measurement or AI metrology I love that word, I’ve learnt to say it it’s a cool script it’s a cool script everything’s Greek to us I love the yogurt don’t you love Greek yogurt so sorry I’m finishing there AI metrology and we’re going to launch I’m chair of the ACM publications committee or co -chair he’s president we’re going to launch a journal first journal in this area and it will be associated pulling together work and the data sharing the data that people are collecting to

Virginia Dignum

thank you Wendy, very important point and I think you can leave it there again if you understand when you have to leave you just leave we understand that so for the rest of us in the panel we start the day or the session talking about AI safety needs to be more than just the technical robustness I love your idea of the the social machines of this AI metrology yes it is it does yeah yeah with me only sometimes probably but i i did my best now yeah anyway i would like to bring you into the discussion how can we both dr shaman and wendy wall gave us examples of issues that we need really to include in going beyond this idea of technical robustness even if systems perform exactly as they have been designed and safely designed they will still probably be causing harm which is not probably just a technical failure but also a failure of inclusion a failure of imagination so i would like to get your opinions from where where you think that we can change the where can we start changing the discourse of of a pure technical approach to a broader inclusive societal institutional approach to the discussion on AI safety, on AI measurement, and so on.

And I would like to start this question, which is for all of you, starting with Professor Ioannis Ioannidis, who is the current president of ACM, and also a professor at the University of Athens.

Yannis Ioannidis

Thank you very much for having me in this panel. I’m a technical person, very sociable, but technical, that’s my expertise. So I want to separate the issue of safety of AI and talk about safety of AI use. And for me, in my technical mind, there is the AI technology, And I think that’s where I’m going to be. which is the algorithms, which are the models, and so on, from the use of this technology, the use of the software that is on AI. And we are using this software both in the beginning with the input that we give it and at the output when we create what is called I have an artificial intelligence, I have an agent, and so on, to do this or that or the other.

The technology, there’s no issue, there’s no social issue in the safety of the technology itself. It’s like the car, whether it’s working or not. There is no issue of safety. And innovation in that regard has to be let free, like the human mind and all the innovators to progress on that. And robustness and not having bugs or not bugs are an issue there, but it’s a day in the park for us. Software engineers and computing scientists. The use is the important thing and sometimes the key thing that people are talking about is the end result, the model. We put it in the judge’s hands, we put it in the doctor’s hands, we put it in the youth’s hands in terms of social media and so on.

This we have to work on, measure, regulate potentially and in any case all sciences like it was said before, especially the humanities, philosophers, ethicists, legal people, cognitive scientists and so on have to come together to address this. But there is also the input side which is again humans doing it. Humans are determining the first parameters where the system is. The first parameters where the systems are starting to be trained. The data that we feed it, it’s again humans that are choosing it. And as much as we have to… regulate or measure or think the end result the model the humanoid or non -humanoid robot that is telling us do thing or that or the agent and the same level of importance is that we have to think about what to do with what comes in and humans are using it different humans are feeding it and i think the safety must start from there we should not grow the input size we should not let it run for free even at that level we have to have the different sciences the different technologies civic society to be represented there and having an ai with whatever data we happen to have or whatever data generates billion dollar industries these are the data that that will use it’s wrong i mean there is a right and wrong here and and we have to be on the right side of that you so As a quick wrap -up, so for others to express their opinion, technology should be running free, but both input and output and result should be in the

Virginia Dignum

Thank you, Wendy. Thank you, Dr. Shiman. Thank you. See you soon. Okay. Okay, let’s continue the discussion. Sarah, Sarah Hooker, you are the co -founder and president, I believe, of Adaption Labs, a very young company, I believe. You have been before with Cohera and with other… developing organizations. What do you think about this balance or tension between the technical robustness, the technical safety measures, and the need for understanding more the environment, the context, the social context in which systems are built? And how can we technologists, those that develop like yourself, be developing systems while they are aware of this type of tension and also the insertion of the systems in very concrete, real -world domains?

Sara Hooker

And typically it’s been how do you build extremely large systems at the frontier of what’s possible. I think it’s interesting. I’ll share a few things. So one, I think what Wendy was getting to is that one of the biggest signals of whether you actually care about safety is what the forms of prestige and power look like. I think that’s mainly her comment. She’s saying, you know, we are at the pinnacle of where we all gather to discuss these things. And the way resources have actually been allocated doesn’t show that people are serious, which I think is fair. I think you have to look to the surrounding environment to understand if people are serious or not about safety or whether it’s just a panel title, candidly.

And maybe today it’s just a panel title. I think in general my philosophy about these forums is that you have to look six months out to actually get a signal of what has happened. That doesn’t mean that they’re not critical. I frankly don’t know if the expectation should be anymore that we have universal rules for AI. It’s not clear to me that that should be the outcome of these forums. So I think decidedly, if you’re going in with that expectation, you’re going to be very disappointed because I don’t think that’s going to happen at this forum or at the next one. But I do think it’s worth asking, well, where are we going as a conversation about safety and the precision of it?

Because for me, that’s the most interesting part. Time is very valuable. It’s our most precious resource. And so for me, the more precise the conversation, the better. I do think if I look at the overarching arc from Bletchley to now, we’ve had now four summits. We’ll have the fifth. It’s worth asking, has it become more precise? Candidly, and thank goodness, yes. I still remember Bletchley where it was all about existential risk and six months from now, and there were protests and hunger strikes from people who thought machines were taking over, but no precision to the conversation, no accountability for where these timelines were coming from. Thank you. And then I look to now, and now we have a very messy conversation about safety.

Certainly everyone has a different view. It’s still a blanket term, but at least it’s more accountable to what is the real -world impact of these conversations and the technology that we build. Because when I started my career as a computer scientist, we were just in research conferences. I mean, I think the fact that ACM is so well represented on this panel speaks to the origins of, like, you know, a very narrow group of people who work in a very academic community, and now our technology is used everywhere. So it’s a much more important conversation to have. So, one, I think we have gotten more precise, but it’s still very murky what people mean. Here’s the other thing I’ll say.

I think there’s often desire in these conversations about where technical meets the ecosystem to say, oh, well, safety has to be everything to everyone. And, frankly, that’s not a precise conversation either, because the truth is there are tradeoffs. When you build systems, there are tradeoffs. And too often when these conversations enter this arena, there’s a misconception about the sheer difficulty of how do you actually impose constraints on these systems. So the other thing I’ll say is the biggest thing that has to come out is an understanding of what you give up, because you give up something. The big things for me are, you know, I work a lot on language. My big ask is just report what languages model providers cover.

Report essentially, like, what they say that the safety parameters are not, and report what they don’t cover or they haven’t tested for. This sounds like a simple ask, but I think this is actually quite precise. And what it establishes is what have we given up? What are you confident about what have we given up? There’s many versions of this, but too often, and this is my ask, in conversations like this, we end up just circling around and saying we want safety, we need perspectives of everyone in the model. And the truth is that’s also a naive statement, because it is almost certainly the fact. that there will be some trade -off. Someone will not be represented.

Someone will be represented. And actually, what I think these forums are very useful for, having us all in the same conference, is about galvanizing ecosystems where you can make your own constraints and trade -offs, but also having a discussion about, you know, for the models that are being shipped that serve billions of people, we have these static monolithic models that are served the same way. What are the trade -offs that they have made, you know? And that’s, you know, as someone who’s built these models, there are almost certainly trade -offs in place. So we need to understand the state of the world as well as where we want to go. And it’s okay if there are clearly, you know, things left out.

It’s more that they have to be stated out loud. That’s my wish list, yeah. So maybe I’ll leave it there, and I’ll pass it on. I think you were next. Go for it.

Virginia Dignum

Thank you very much. Thank you, Sarah. And indeed, next one. Jibu Elias, you are a researcher, but you are also an activist who examines how technology and innovation institutions receive knowledge, labor, and legitimacy. so help us making sense of what it means safety, AI safety for society that seems to be what you do

Jibu Elias

I was more interested in the real world consequences of the panel title but wonderful conversations by Sarah and Wendy and all here so when I look back how technology has shaped my understanding of the world I feel like an idiot because I grew up in a time watching this animated shows like Jetsons and all these futuristic shows believing that the more advanced the technology gets the better the better our world will be I grew up as this idealist kid who thought when AI comes there will be no inequality that’s what I’m saying I was an AI kid back then and nowadays when I look at these things. I mean, there have been phenomenal work done by computer scientists like people present here in panel, Sarah and everyone, right?

On technical aspects of things. But more and more, we are seeing AI now becoming more political. It’s becoming a larger sociopolitic construct in general. And what concerns me more is its exploitative and extractive nature. I think Sarah mentioned about Bletchley and where the talk was all about existential risk. But now I think we are all at a point where we are agreeing that the accumulated risk have become more worrying at the same time. I’ve been tracking people who’ve been using tools, people who’ve been impacted by and those who were excluded from the benefit of this kind of technology, right? If you go around states like Telangana, Chhattisgarh, Jharkhand, there are big group of tribal populations.

You know, their language is not represented in Gemini or anything, right? And I know everybody wants to impose Hindi on all of us, but sorry, I still, you know, Hindi is not the national language of India. But what about them? How do they get access? So more and more, what I’m seeing is the divide between the socioeconomic divide becoming more wider, especially in countries like India. And, you know, it’s fascinating that, you know, we’ve been celebrating the data centers that we’ve been building. And I mean, I had firsthand experience of a data center that’s very much celebrated in Telangana in a place called Make a Good. I’m not, I don’t want to mention the company associated with it, but how it was built, how the people were manipulated, how the groundwater being extracted, right?

In a place where there is a water scarcity, you know, and when I asked the company, you know, Hey, this happened and I have a close association with that organization. and they said we interacted with the community leaders. So what I did, I reached out to the serpents. He has no idea what they mean. So essentially there’s a lot of, you know, I mean, in India we know what that means, reaching out to community leaders, bribing the politicians. But that’s the larger things I’m worried about. And the people who are using this technology, you know, now some people are talking about terms like AI psychosis. I don’t know how valid those terms are. But it’s fascinating to see that me and my executive director of Muslim Foundation has been chatting about how elderly people are using these models.

It’s very fascinating and it’s worrying at the same time. You know, we often put our attention on younger folks. But, I mean, it’s funny at the same time, but still. So my larger question is why, you know, the going forward, like yesterday the gentleman from US was telling that, you know, everyone should use a US AI stack. I think people in Denmark will be a good idea how US rates its strategic partners. Yeah. Yeah, so my larger question is where are we headed, right? Are we still going to have this extractive nature, you know, the data annotation workers who are building these models, right? So I will stop here and looking forward to the next level of conversation.

Virginia Dignum

Unfortunately, we have our second round of the panel and like all that, what we all are complaining about, it will happen. We all say our thing and the dialogue will need to be done outside in the corridor and we really hope also to, after this meeting, try to combine all what has been said in some kind of ask or report. But anyway, now we are moving to the second part of the panel. We were all going to be in the same panel, but there weren’t enough chairs. So we are splitting into two. Patience with us. You are proxy. Okay. okay everyone uh thank you so much for being here in the second part of our session and thank you for all of the panelists who are joining me here on stage i think we’re going to do something a little different than the first panel did i would like everyone to just quickly introduce themselves um nay how would you uh start

Neha Kumar

hello check okay uh hi everyone i am neha kumar and i’m an associate professor at georgia tech in the school of interactive computing i’m also uh president of this special interest group on computer human interaction and so uh this summit is um is really a coming together of many different worlds for me i actually i grew up in delhi so it’s been uh about coming home but also uh a lot of people have been coming to me for a long time and i’m really excited to be here a lot of the conversations we’ve been having are conversations that are really very very active right now discipline of human -computer interaction, HCI, some of you might know it, and it’s great to see how central human centricity is to what we’ve been discussing.

And third, something that’s been much closer to my own area of study is really looking at HCI and technology use in the context of social impact, and this has been named in many different ways over the years, social goods, social impact, societal impact, public interest, whatever you want to call it. But really, it’s an area that we’ve been studying for many, many years before AI was on the scene. And so I would say that we’re looking at multidisciplinarity in this panel, and to me, there’s a lot of learning that could be happening from many of these disciplines that have been actively looking at some of these, agreed that the platform that we’re looking at is different.

It’s unprecedented in many ways. At the same time, there’s a lot that we have to learn from as well. So I’ll stop there.

Virginia Dignum

Thank you, Neha. Thank you, Eugena. Merve Hickok.

Merve Hickok

I’m the president and policy director for Center for AI and Digital Policy. We are an independent think tank working globally at the intersection of AI policy and human rights, democratic values, and rule of law. So I would like to take a more expansive view of safety and governance at large. More to come on that. Thank you.

Virginia Dignum

Rasmus?

Rasmus Andersen

Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute of Government where I advise leaders around the world at the prime ministerial or presidential level, but also at the line minister level on navigating AI. What does it mean for them? How they both deliver results to citizens with AI and also avoid them. I think it’s important to avoid. harm to their citizens. And so the question of safety comes up a lot, but it’s also usually not the top of leaders’ minds, and it’s really about, for me, helping them often realize the long -term best interest, informed self -interest of what will actually, what is the world likely to look like in 2030, in 2035?

How can you best make sure that your country and your constituents and citizens are in the best possible position as the world will change very rapidly? Thank you.

Virginia Dignum

Tom?

Tom Romanoff

Is this one working? Great. I am not James. I am Tom Romanoff. I am the director of policy for ACM, where I help manage the policy committees, which Gina and Virginia chair our global committee. We also have regional committees across the world, including the United States, Europe, Asia, India. Africa. Africa. and the APEC regions. So what my job at ACM is is to help the computer science folks translate their recommendations on harms or issues that they see in the technology to policymakers and engage those policymakers on behalf of ACM. So before that, I was at a think tank in Washington, D .C., so I worked with Congress and have been working in tech policy for many years now.

Jeanna Matthews

Okay. Okay. So in the interest of time, I’m going to get right to a very provocative question, which is we’ve been seeing wellness for all, happiness for all, in the presence of a fairly extractive and exploitive potential. Does history tell us that it’s going to be great for everyone, just works out, or there have to be some musts, not just good intentions or shoulds? If we are not seeing things like recovery, retribution, remuneration, we don’t see people going to jail when they do bad things with AI. Are we serious about AI safety?

Merve Hickok

So no, history does not show us that it’s going to be cool. And history is definitely another good indicator, which means that we need to fight harder this time around and try to get that level up, right? So history is always a story of the powerful, of the winner, like who gets to decide the narrative. And we are seeing that again today, the narratives around what is safety, what should be the evaluations, where should the money go, whether we should regulate or not. Whether it should be. It should be should or must. is always the narrative of the powerful. And when Dame Andy Hall mentioned, the representation was very much the same kind of people throughout the conversations, higher -level conversations yesterday.

So I think first and foremost, the narrative needs to change in safety as well. So far it has been, I think it’s been an evaluation, but so far the most important safety issues has been around nuclear, cyber security, chemical weapons, etc. Yes, they might be, or existential risk, which is another story. Yes, maybe we talk about those, we should talk about those, but there are real consequences right now on people’s rights, freedoms, ability to live with their dignity, and people’s rights to participate in democracy, and democratic processes. All of these are undermined, and as an organization where those… three issues are in our mission, we are seeing this more and more under pressure. So this is the time to get your voices up as citizens, as consumers, as professionals in your own right, and try to change the narrative.

Because otherwise it’s going to just be a repeat of history.

Jeanna Matthews

Well said. Neha?

Neha Kumar

Yeah, I think coming back to something that Wendy said, right, about being all -inclusive at the same time as having no women around in decision -making places, I think that that is something we should really be thinking about. I mean, do we have a history of being inclusive? What inclusivity have we been practicing in our innermost circles? It’s easy enough to say that the poorest of the poor should have access to this AI, but how are we doing on being all -inclusive? So I think there are lessons from disciplines such as feminist and women’s studies that we can learn from to really ask the who question. Who is making decisions? Who is being benefited? Who is part of the design process?

That’s one. Second, I would say in learning from design, which is one of the disciplinary disciplines that I’ve trained in, thinking about zooming out is great, and that’s where we have value. We talk about inclusivity. We talk about diversity. We talk about all these great -sounding words, but then when we zoom in, what are we actually doing? I think that a lot of the dialogue that we’ve been having is in this disembodied state where we talk about infrastructure, and we talk about data, and we talk about interoperability, and we talk about processes, but who is benefiting? The panelists before me also talked about aging, so people who are… more vulnerable, where are they in the conversation?

And lastly, with regards to development studies, thinking about… what are the benefits of development really. Like we want development and impact, and that’s what we’re talking about here for five days at the summit. But we know from historical perspectives that development hasn’t worked out so well for so many people and so many countries across the globe, and how are we making sure that we don’t repeat those same mistakes? And I think these have to be very much part of the conversations so that it’s safety of the human, of the body, of our values, of just our communities, our structures, social structures that are so critical to us. Thank you.

Jeanna Matthews

Gnasmus?

Rasmus Andersen

Yeah, I think we’re not seeing people go to jail. I’m not sure we have seen something just as of yet that really where that’s the case. There are lawsuits ongoing on suicides among young people, et cetera. But I do think that we will see a moment pretty soon where something does go pretty wrong, and then we’re going to have a decision on what we do with that. Some people – this is a very dark parallel. Some people said we needed to have World War II to have the UN and other systems that were put in place to avoid that happening again. And, yeah, I think it’s a matter of time when we get something, and we will have to make those decisions.

And currently, I think I’m not super confident that we will interpret those events correctly, that we will have a realistic view of what might change and how we might prevent them from happening again. And it could be people leveraging them, organized crime. It could be – I mean, we’ve had – Very recently, these – where we’ve successfully had Elon Musk and Grok stop allowing – people to create non -consensual deep fakes of nudes, which had happened in the millions. So we have sort of small, that’s not small, but we’ll have much bigger things than that. And I do think still when that happens, we will have to think about both pros and cons and costs and benefits. When we regulate things, we don’t regulate risks down to zero.

You know, when you get into a car, there’s a risk something will happen, but you still need to get places. And I think it’s, with safety, we do have to take some of the same lessons, as Mariam mentioned, from nuclear, from flights. You know, it used to be that when you got on an airplane, you know, something like 200 or 1 ,000 more of them crashed than today. And we’ve reduced that level of risk very far down. And I do think that the political level, while we need technical inputs, the only force in the world. I can really take all those considerations together and think about the partial perspectives that technical people have, that civil society has, that industry has.

Really, the only place it comes together imperfectly is at government, and that’s why it’s so important that we are here, however imperfect these summits are.

Jeanna Matthews

Tom?

Tom Romanoff

All right, something a little different. I would like everybody in the room to raise their hand if you think safety is an important aspect of the AI deployment. Great. Keep your hands up. Keep your hands up. Now, take your hand down if you think that safety should be enforced on the output of AI outcomes. Oh, wow. Okay. Take your hand down if you think that laws should apply to the outputs of AI rather than AI itself. Okay? All right, you can go ahead and put your hands down. It wasn’t as dramatic as I thought. I thought it would be. So I’m going to talk a little bit about the 49 -51 % rule. And across all political spectrums, no matter where you are in the world, there’s this idea that you only need 51 % of the political willpower to start passing regulations, and 49 % won’t get it done.

It applies in the business world as well. You have 51 % of the board control or equity in a company. Basically control that company, right? Right? Lobbyists have an extreme incentive to not push anybody past that 51 % or 49 % in order to have an action in the political space, right? So across all of our governments in here, there is private – I don’t want to say private sector because they’re important, but there are private entities that would like to have an action in the regulatory space. And it’s not until 51 % of those politicians or that political regulator or that regulation gets – it’s to that threshold that you’ll start seeing some changes. And so you see examples of that with the example that my colleague here mentioned with deepfakes or notification applications causing worldwide outrage.

And you started seeing governments across the spectrum say, that’s something that at least 51 % of our population does not want. And so they start moving towards regulating or enforcing current laws to punish that kind of action. And so I say all this because there is also this conversation around moderates, right? We don’t know where the technology is going. We have computer scientists. We have civil society screaming about the need for action, for security within the stack, right? And the rest of the world are moderates. They’re still engaged. They’re still engaging this AI. They’re still figuring out what it can be doing. And it’s not until some kind of action happens, some kind of consequence, some kind of…

issue happens that people wake up to the folks who’ve been screaming about it for years. And so what I encourage everybody in here is not be a moderate. Pick a side and start encouraging your politicians, your family, your community. Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person, to the person who can understand it. And that’s when you’re going to start seeing the regulations start to roll out.

Jeanna Matthews

I think that’s a great place to end because I think we are not going to get happiness for all and wellness for all unless we insist. We’re all going to have to insist. It’s not going to come automatically. So asking each of us to ask ourselves a question, what are we going to do to insist, I think is a really good place to end. I think we started this session a little late but I’ve been told that they would really like us to try to end on time so I think I will leave it there but we would love to engage you in conversation out in the hall after this session is over thank you to all the panelists in the first session and also all of us up here thank you so much thank you all indeed I think that there is actually time to have one question or two questions maybe now there are too many questions I have to vote ok sir there and the lady there

Participant

so I would like to a very short question I would like it’s not a question it’s a suggestion to the gentleman who has beard on that side name I missed yeah Jitu that go get some life at Sarvam I think that setup your agenda of Hindi and other language is going to die very soon so you have to get some life of that Hindi imposition and all those things nobody will impose down the line few ones sure thank you so much for the provocative discussion this is what I was hoping to get that the India impact summit my question is about how can regulatory artifacts like data set cards model cards system cards rigorous evaluations user feedback now be extended to cover multiple languages multiple contexts and multiple cultures I think a lot of hard work

Speaker 2

be

Merve Hickok

ing used as well. So it might perform really good in English, but we know that these systems are not safe or secure or perform that well in many different languages that are not English or as resource intensive as English. So great question. They need to be dynamic and they need to reflect languages. And I will also say just very briefly following up on this is that these are things that governments can require for model providers to release models in your jurisdiction. And they so far are not. Thank you very much. We could insist. We need to insist. They are like California started this. I just want to just…

Virginia Dignum

I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be able together with all the panelists to create some kind of model for the next year. Thank you very much. the measures and we will hopefully facilitate and continue this discussion I would ask all the panellists of the first and of the second round to stay here for a memento from the organisation and I would like to thank you all for being here and all the panellists again of course thank you so much Thank you Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (15)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“AI safety is often reduced to technical notions such as model alignment, red‑team testing and benchmark performance, but these tools do not address the core question of whether AI creates societal value or harm; AI impact is shaped by deployment context, governance capacity, incentive structures and lived realities.”

The knowledge base describes the summit discussion as moving beyond purely technical approaches to AI safety toward multidisciplinary governance frameworks that address real-world societal impacts, confirming Dignum’s point [S1].

Confirmedmedium

“The summit lacked gender diversity, with women under‑represented and panels dominated by “alpha males”.”

An IGF 2023 workshop notes a gender disparity in standards work and limited involvement of diverse stakeholders, echoing Hall’s criticism of gender imbalance at the summit [S103].

Additional Contexthigh

“Safety concerns lie primarily in the data‑input stage and the deployment‑output stage, requiring regulation and multidisciplinary oversight involving humanities, legal, ethical and civic‑society experts.”

UN commentary highlights that AI governance must be multifaceted, including prevention, mitigation, human-rights-based policy and community engagement, providing context for the need to regulate data and deployment phases and to involve a broad set of disciplines [S56]; the summit’s broader call for multidisciplinary governance also supports this view [S1].

External Sources (104)
S1
From Technical Safety to Societal Impact Rethinking AI Governanc — -Sara Hooker- Co-founder and president of Adaption Labs, formerly with Cohera and other developing organizations
S2
From Technical Safety to Societal Impact Rethinking AI Governanc — -Yannis Ioannidis- Current president of ACM, Professor at the University of Athens
S3
From Technical Safety to Societal Impact Rethinking AI Governanc — -Dame Wendy Hall- Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institu…
S4
EQUAL Global Partnership Research Coalition Annual Meeting | IGF 2023 — Barhanu Nugusi, the Pan-African Youth Ambassador for Internet Governance, is actively working on internet-related issues…
S5
Session — – Eliud Kibii: Journalist, political analyst and editor Mwende Njiraini: Okay, good morning, good afternoon and good ev…
S6
Closing Ceremony and Orientation for WAIGF 2025 — – Abilahi Eliassu: Cybersecurity analyst at National Information Technology Development Agency Audience: Good evening e…
S8
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — Anybody else on front row? Anyone? Okay, please. Anybody else? Anybody in third row? Okay. Please. Anyone else? Yes, …
S9
https://dig.watch/event/india-ai-impact-summit-2026/advancing-scientific-ai-with-safety-ethics-and-responsibility — And I guess there’s, we, in the recommendation from the RAND Europe that I was, you know, helping out with is that we re…
S10
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S12
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S13
Leaders TalkX: ICT application to unlock the full potential of digital – Part II — – **Participant**: Role/Title not specified, Area of expertise not specified
S14
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S15
From Technical Safety to Societal Impact Rethinking AI Governanc — -Gina Matthews- Co-host of the session, Chair of the Technology Policy Council of ACM (mentioned by Virginia Dignum but …
S16
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you so much. Today’s session’s focus is to discuss the roadmap for AI Policy Lab that we have de…
S18
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Moderator:Thank you very much, Ivana. And as you say, new technologies create new problems sometimes, but they can also …
S20
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — All right, something a little different. I would like everybody in the room to raise their hand if you think safety is a…
S21
From Technical Safety to Societal Impact Rethinking AI Governanc — Is this one working? Great. I am not James. I am Tom Romanoff. I am the director of policy for ACM, where I help manage …
S22
From Technical Safety to Societal Impact Rethinking AI Governanc — -Gina Matthews- Co-host of the session, Chair of the Technology Policy Council of ACM (mentioned by Virginia Dignum but …
S23
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — And currently, I think I’m not super confident that we will interpret those events correctly, that we will have a realis…
S24
From Technical Safety to Societal Impact Rethinking AI Governanc — Yes. I think this works now. Yes, my name is Rasmus Andersen. I work with the Tony Blair Institute of Government where I…
S25
Closing remarks – Charting the path forward — Bouverot argues for comprehensive inclusion in AI governance discussions, extending beyond just governmental participati…
S26
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S27
Dynamic Coalition Collaborative Session — The panelists’ emphasis on moving beyond purely technical approaches toward comprehensive frameworks addressing economic…
S28
Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online — High level of consensus with significant implications for policy development. The agreement across different stakeholder…
S29
High-level AI Standards panel — 3. **Include**: Engaging diverse stakeholders beyond traditional technical communities The discussion highlighted the n…
S30
S31
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in …
S32
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Large language models have demonstrated dangerous capabilities, including documented cases of AI systems coaching childr…
S33
Main Session on Artificial Intelligence | IGF 2023 — Finally, it was suggested that an independent multi-stakeholder panel should be implemented for important technologies t…
S34
AI governance debated at IGF 2025: Global cooperation meets local needs — At theInternet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S35
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — And so those are the sorts of conversations I have. I think, you know, in the AI space, I think you can look at countrie…
S36
UNGA/DAY 1/PART 2 — The advancement of AI is outpacing regulation and responsibility, with its control concentrated in a few hands. (UN Secr…
S37
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — **Human Control and Oversight**: Despite different approaches, speakers across perspectives emphasized the importance of…
S38
The Overlooked Peril: Cyber failures amidst AI hype — This has become evident in recent years concerning the security of digital products due to several high-effect cyberatta…
S39
Building Indias Digital and Industrial Future with AI — The panel discussion, expertly moderated by Debashish Chakraborty, revealed a sophisticated understanding of the challen…
S40
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This distinction has profound implications for risk mitigation strategies. Safety requires internal controls and model v…
S41
Policymaker’s Guide to International AI Safety Coordination — As the final substantive comment, this provided a provocative reframing that challenged participants to consider whether…
S42
Artificial intelligence (AI) – UN Security Council — Furthermore, another critical responsibility discussed is the implementation of robust safety measures to prevent misuse…
S43
Global AI Governance: Reimagining IGF’s Role & Impact — Paloma Lara-Castro: Thank you, Liz. Hi, everyone. Thank you for the space. I’m representing Derechos Digitales. We are a…
S44
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — She notes that legal certainty which can only be provided through regulations is necessary. The panel also explored the…
S45
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S46
Democratizing AI Building Trustworthy Systems for Everyone — And the US, this is the man again who drank bleach during COVID, says no regulation. So we can’t talk about the network …
S47
AI experts ask governments to introduce algorithmic impact assessments — In apaper released by artificial intelligence (AI) experts from the AI Now Institute, governments are invited to conduct…
S48
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Examples of missing stakeholders include women’s rights organizations, trade unions, journalists, researchers who should…
S49
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Gautam brought attention to the lack of capacity in developing nations to implement or create AI standards, highlighting…
S50
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It hi…
S51
From principles to practice: Governing advanced AI in action — Discussion of different governance approaches being implemented across regions and stakeholder groups Legal and regulat…
S52
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptiv…
S53
Main Session | Policy Network on Artificial Intelligence — Anita Gurumurthy: Sure, I can do that. Am I audible? Okay. Thank you. I just wanted to commend the report, and especia…
S54
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S55
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S56
What is it about AI that we need to regulate? — A key distinction emerged around technical versus broader governance issues. InWorkshop 344 on WSIS+20 Technical Layer, …
S57
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S58
The Dawn of Artificial General Intelligence? / DAVOS 2025 — Yoshua Bengio advocates for substantial investment in AI safety research alongside the development of AI capabilities. H…
S59
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Furthermore, the analysis underscores the importance of considering regional regulations and governance in cybersecurity…
S60
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S61
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S62
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspec…
S63
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in Oslo. I wanted to come specifically to your quest…
S64
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
S65
Main Topic 3: Europe at the Crossroads: Digital and Cyber Strategy 2030 — The disagreement level was moderate and constructive. Speakers generally agreed on core goals like improving cybersecuri…
S66
High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech — Effective governance requires clear separation of roles rather than treating all stakeholders as equals in multi-stakeho…
S67
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — To ensure responsible AI deployment, Egypt has issued a charter that provides guidelines for promoting citizen well-bein…
S68
High-level AI Standards panel — Need to embrace a socio-technical paradigm that goes beyond technical aspects to include societal considerations
S69
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S70
AI Governance Dialogue: Steering the future of AI — Because principles and declarations alone are not enough. We need technical standards that translate high level commitme…
S71
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Agents functions as invaluable teammates, unlocking productivity gains and time savings, which we all want more of. Howe…
S72
Four seasons of AI:  From excitement to clarity in the first year of ChatGPT — Dealing with risks is nothing new for humanity, even if AI risks are new. In environment and climate fields, there is a …
S73
Toward Collective Action_ Roundtable on Safe & Trusted AI — This comment introduces a temporal framework that prioritizes immediate, observable risks over speculative future threat…
S74
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you. My name is Sonny. I’m from the National Physical Laboratory of the United Kingdom. There’s a few wor…
S75
Towards a Safer South Launching the Global South AI Safety Research Network — Crampton argues that evaluations must be continuous and supported by large‑scale infrastructure investments to track mod…
S76
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S77
Advancing Scientific AI with Safety Ethics and Responsibility — “Those we’ll put in a higher risk category compared to something which is just working, let’s say, on certain animals wh…
S78
High-level AI Standards panel — The discussion highlighted the need for enhanced collaboration among standards organisations to address AI’s complexity …
S79
Lightning Talk #245 Advancing Equality and Inclusion in AI — Bjorn Berge: Thank you very much, Sara, and very good afternoon to all of you. Let me first start by congratulating Norw…
S80
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Hannah Taieb:Real diversity is very important indeed, and it all depends on the models and business models. Algorithms a…
S81
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Monica Lopez: Okay, yes. So, can you hear me okay? Yes? All right. Well, first of all, thank you for the forum organiz…
S82
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Closing off research could create power asymmetries and solidify the current power positions in the AI industry. Another…
S83
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Maria Paz Canales:Thank you. Thank you very much for the invitation for being here. I think that the benefit of being al…
S84
From principles to practice: Governing advanced AI in action — Discussion of different governance approaches being implemented across regions and stakeholder groups Legal and regulat…
S85
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — This comment reinforced the toolkit approach discussed in the first segment by validating the need for flexible, adaptiv…
S86
Policymaker’s Guide to International AI Safety Coordination — “institutionalizing it should be a priority.”[119]. “We need to start thinking how we can build structures and perhaps i…
S87
Democratizing AI Building Trustworthy Systems for Everyone — – Peter Mattson- Wendy Hall – Wendy Hall- Other panelists While both advocate for measurement, Mattson focuses on tech…
S88
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — And the US, this is the man again who drank bleach during COVID, says no regulation. So we can’t talk about the network …
S89
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S90
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — So I think first and foremost, the narrative needs to change in safety as well. So far it has been, I think it’s been an…
S91
Harmonizing High-Tech: The role of AI standards as an implementation tool — Philippe Metzger:Thank you, Bilel. Maybe to be as succinct as possible, just would like to mention four areas, which I t…
S92
AI for social good: the new face of technosolutionism — Abeba Birhane presents a critical analysis of AI systems and their impact on society, arguing that current AI technologi…
S93
Global Enterprises Show How to Scale Responsible AI — The implementation challenge extends beyond organisational commitment to practical tooling and automation. Gurnani empha…
S94
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240/2/OEWG 2025 — Mozambique: Mr. Chair, thank you for giving us the floor. With regard to application of international law to the use …
S95
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 3 — Mozambique: Distinguished Chair, since it’s our first intervention in this session, the Mozambique delegation commends…
S96
Main Session 2: The governance of artificial intelligence — Human Rights and Ethical Considerations Human rights | Legal and regulatory Mashologu emphasizes that AI governance mu…
S97
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S98
Agenda item 6: other matters — Mozambique: Thank you, Chair. Mozambique will speak out for national capacity. Mozambique delegation recognize that c…
S99
MahaAI Building Safe Secure & Smart Governance — “The answer is intelligent governance”[1]. “Governance frameworks must evolve as the artificial intelligence evolves”[2]…
S100
State of play of major global AI Governance processes — Hiroshi Yoshida from Japan discussed the country’s active role in international AI governance, including the Hiroshima A…
S101
WS #98 Towards a global, risk-adaptive AI governance framework — During the Q&A session, the importance of standards in AI governance was discussed. Speakers highlighted the need for te…
S102
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S103
Internet standards and human rights | IGF 2023 WS #460 — In addition to the gender disparity, there is a noted lack of involvement from governments and their agencies, including…
S104
Panel Discussion: 01 — “The percentage of people that have access.”[19]. “Quality AI enabled services.”[9]. “They have to benefit from healthca…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
V
Virginia Dignum
4 arguments62 words per minute1141 words1103 seconds
Argument 1
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum)
EXPLANATION
Virginia argues that focusing solely on technical measures such as model alignment and benchmarking overlooks the broader factors that determine AI’s real‑world value or harm. She stresses that governance capacity, incentive structures, and the lived realities of affected communities shape AI outcomes.
EVIDENCE
She notes that safety is often framed in technical terms like model alignment and red-team­ing, but the core question is what determines whether AI produces societal value or harm, emphasizing the role of deployment context, governance, and institutional systems [14-24]. Later she reiterates the need to move beyond pure technical robustness when discussing the panel’s focus [104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to go beyond technical robustness and include multidisciplinary governance and societal context is emphasized in the discussion on technical safety versus societal impact [S1].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
DISAGREED WITH
Yannis Ioannidis, Tom Romanoff
Argument 2
Panel emphasizes moving beyond technical safety to multidisciplinary policy frameworks (Virginia Dignum)
EXPLANATION
Virginia frames the session as a call to shift AI safety discussions from narrow technical concerns to broader, multidisciplinary policy approaches. She highlights the importance of integrating law, social sciences, and governance structures into AI safety work.
EVIDENCE
In her opening remarks she states the session will discuss moving beyond technical safety toward multidisciplinarity, governance, and real-world impact, and she invites panelists to address these broader issues [14-24][104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel calls for multidisciplinary policy frameworks are echoed in the Dynamic Coalition Collaborative Session that stresses moving beyond purely technical approaches [S27].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
Argument 3
Even perfectly designed AI systems can cause harm if societal inclusion and imagination are lacking.
EXPLANATION
Dignum points out that safety failures may arise from a lack of inclusive perspectives and imagination about broader impacts, not merely from technical flaws in the system.
EVIDENCE
She observes that systems may perform exactly as designed yet still cause harm because of failures of inclusion and imagination, emphasizing the need to broaden safety considerations beyond technical design [104-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that inclusion and imagination are essential to prevent harm aligns with the broader governance perspective that safety cannot be limited to technical design [S1].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
Argument 4
The panel should develop a concrete collaborative model for AI safety governance to be implemented in the next year.
EXPLANATION
Dignum expresses the intention to work with all panelists to produce a shared model or report that will guide AI safety efforts in the coming year.
EVIDENCE
She states hope to create some kind of model for the next year and to combine panelists’ input into a report, indicating a concrete plan for ongoing collaboration [375-379].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for a concrete collaborative model matches the Dynamic Coalition session’s focus on creating actionable, multi-stakeholder frameworks [S27] and the proposal for an independent multi-stakeholder panel on critical AI infrastructure [S33].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
L
Lourino Chemane
2 arguments160 words per minute573 words213 seconds
Argument 1
Safety is the protection of people and requires multidisciplinary governance, human oversight, and ethical standards (Lourino Chemane)
EXPLANATION
Lourino defines AI safety as the protection of people, not just systems, and calls for governance that integrates law, ethics, education, labor, and affected communities. Continuous human oversight and institutional accountability are essential to ensure safe AI deployment.
EVIDENCE
She outlines that safety means protecting people, prioritising human, social and institutional impact, and that effective AI policies need input from law, social sciences, ethics, and affected communities, as well as continuous human oversight and protection of women, children, and youth [30-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safety as protection of people, requiring law, ethics, and continuous human oversight, is highlighted in the technical-to-societal safety discussion [S1] and reinforced by calls for human oversight in autonomous systems [S37].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Virginia Dignum, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
Argument 2
National AI strategies must address infrastructure sovereignty, cybersecurity, and digital government interoperability (Lourino Chemane)
EXPLANATION
Lourino explains Mozambique’s ongoing work on a national AI strategy, emphasizing data policy, cybersecurity, regulation of data centres and cloud computing, and an interoperability framework for public administration. These elements are seen as essential for sovereign, safe AI deployment.
EVIDENCE
She describes drafting a data policy, reviewing the national cybersecurity strategy, adopting regulations for data-centre construction and cloud computing, and updating the interoperability framework to improve efficiency of public services [43-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
National AI strategy elements such as data policy, cybersecurity, and interoperability are discussed in the AI-driven cyber-defense briefing for developing nations [S30] and in comparative country approaches to AI governance [S35].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
Y
Yannis Ioannidis
2 arguments140 words per minute537 words229 seconds
Argument 1
Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs and outputs (Yannis Ioannidis)
EXPLANATION
Yannis separates the technical safety of AI models from the safety of their use, arguing that the latter—how inputs are chosen and how outputs are applied—requires regulation and multidisciplinary oversight. He stresses that both the data fed into models and the contexts in which they are deployed must be governed.
EVIDENCE
He states that the technology itself (algorithms, models) does not raise safety issues, but the use, including the data inputs and the decisions made by humans, must be measured, regulated, and involve multiple disciplines [108-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The distinction between technical safety and use-case regulation, focusing on inputs and outputs, is articulated in the human-rights-focused AI governance session [S18] and the Policymaker’s Guide to AI safety coordination [S41].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Virginia Dignum, Lourino Chemane, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
Argument 2
Safety must be enforced through law on AI outputs, not just on the technology itself (Yannis Ioannidis)
EXPLANATION
Yannis argues that legal frameworks should target the outcomes produced by AI systems rather than only the underlying technology. This approach ensures accountability for harms that arise in real‑world deployments.
EVIDENCE
He explicitly says safety must start from regulating both input size and output, suggesting that laws should apply to AI outputs rather than merely the technology [108-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Legal frameworks targeting AI outputs rather than the underlying technology are advocated in the same human-rights-oriented discussion [S18] and reinforced by policy guidance emphasizing outcome-based regulation [S41].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
AGREED WITH
Tom Romanoff, Jeanna Matthews, Merve Hickok, Participant
D
Dame Wendy Hall
3 arguments147 words per minute1140 words462 seconds
Argument 1
Lack of diversity undermines ethical AI; inclusive representation is essential for true safety (Dame Wendy Hall)
EXPLANATION
Wendy points out that AI discussions and leadership are dominated by men, which she argues compromises ethical outcomes. She stresses that without gender and broader diversity, AI systems cannot be truly safe or unbiased.
EVIDENCE
She observes that 50 % of the population (women) were not represented at the summit, noting the all-male panel and emphasizing that lack of diversity leads to ethical blind spots [78-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The critique of gender-imbalanced panels and the link to ethical blind spots are echoed in the equality and inclusion lightning talk [S45] and the broader governance call for diverse participation [S1].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Neha Kumar, Jibu Elias, Virginia Dignum, Merve Hickok
Argument 2
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall)
EXPLANATION
Wendy introduces the concept of AI metrology, a systematic science for measuring AI’s socio‑technical impact, likening AI systems to “social machines”. She calls for dedicated research institutes and a new journal to advance this field.
EVIDENCE
She describes the launch of a UK centre for AI measurement, the AI Security Institute, and her vision of studying “social machines” and establishing “AI metrology” with a dedicated journal [90-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of AI metrology and “social machines” as a systematic measurement discipline is introduced in the technical-to-societal safety discussion [S1] and further developed by the UK AI measurement institute initiative [S46].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
AGREED WITH
Sara Hooker, Participant, Virginia Dignum, Tom Romanoff
Argument 3
AI safety requires long‑term monitoring and longitudinal studies to understand delayed consequences.
EXPLANATION
Hall argues that collecting data over extended periods is essential to assess the real impact of AI interventions, especially when immediate bans may have unintended effects.
EVIDENCE
She mentions the need for longitudinal studies, referencing Australia’s age-restriction experiment on social media and noting that behavioral issues take much longer to surface, highlighting the difficulty of assessing impacts with short-term measures [89-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for long-term, longitudinal monitoring of AI impacts is highlighted in the World Economic Forum panel on dangerous AI capabilities and the importance of extended observation periods [S32].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
S
Sara Hooker
1 argument191 words per minute918 words287 seconds
Argument 1
Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
EXPLANATION
Sara argues that AI safety conversations must become more precise, openly acknowledge trade‑offs, and require clear reporting of what safety parameters are covered or omitted. She sees this transparency as essential for accountability.
EVIDENCE
She notes that prestige and power allocation reveal seriousness about safety, calls for precise conversation, highlights trade-offs in model design, and asks for reporting of language coverage and safety gaps as a concrete step [135-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for precise safety conversations, explicit trade-off reporting, and algorithmic impact assessments are reflected in the expert recommendation for mandatory impact assessments [S47].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Dame Wendy Hall, Rasmus Andersen, Tom Romanoff
N
Neha Kumar
2 arguments163 words per minute643 words236 seconds
Argument 1
Human‑centred design and HCI research highlight the need for inclusive, context‑aware AI systems (Neha Kumar)
EXPLANATION
Neha emphasizes that human‑computer interaction research has long studied social impact, user‑centred design, and inclusive technology, providing a foundation for AI systems that respect context and diverse user needs.
EVIDENCE
She describes her background in HCI, the study of social impact, and the importance of learning from disciplines that have examined inclusive design for years before AI emerged [233-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centred design principles and the importance of inclusive, context-aware AI are underscored in the multistakeholder AI governance forum that stresses human-rights-based design [S31].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
AGREED WITH
Virginia Dignum, Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Jibu Elias
Argument 2
Question who decides, who benefits, and who is involved in design to avoid exclusion (Neha Kumar)
EXPLANATION
Neha calls for critical reflection on decision‑making power, beneficiary identification, and inclusive design processes, warning that current dialogues often ignore who actually gains from AI deployments.
EVIDENCE
She asks who makes decisions, who benefits, and who is part of the design process, linking these questions to feminist studies, design thinking, and development studies, and notes the lack of women and vulnerable groups in current conversations [285-303].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The imperative to identify decision-makers, beneficiaries, and inclusive design processes is emphasized in the closing remarks on charting an inclusive AI governance path [S25] and the high-level AI standards panel’s call for diverse stakeholder engagement [S29].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Dame Wendy Hall, Jibu Elias, Virginia Dignum, Merve Hickok
M
Merve Hickok
2 arguments148 words per minute454 words182 seconds
Argument 1
Safety must protect human rights, democratic values, and be driven by an expanded governance narrative (Merve Hickok)
EXPLANATION
Merve frames AI safety as a matter of safeguarding human rights, democratic participation, and rule of law, arguing that safety narratives need to shift toward protecting freedoms and dignity.
EVIDENCE
She describes her organization’s focus on AI policy, human rights, democratic values, and the need for an expanded safety narrative that addresses rights, freedoms, and democratic participation [242-246][271-283].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Virginia Dignum, Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Neha Kumar, Jibu Elias
Argument 2
Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
EXPLANATION
Merve argues that the current safety narrative is dominated by powerful interests and must be reframed to centre human rights, democratic processes, and equitable participation.
EVIDENCE
She notes that history shows narratives are set by the powerful, stresses the need to shift safety discussions toward rights, freedoms, and democratic participation, and calls for collective action to change the narrative [271-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to shift the safety narrative toward rights, freedoms, and democratic participation aligns with the UNGA emphasis on common standards for human-rights protection [S36] and the multistakeholder AI governance discussion [S31].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
AGREED WITH
Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Participant
J
Jibu Elias
3 arguments155 words per minute658 words253 seconds
Argument 1
Language and cultural exclusion (e.g., tribal groups) illustrate extractive AI practices (Jibu Elias)
EXPLANATION
Jibu highlights how AI systems often ignore minority languages and cultures, leading to extractive practices that marginalise tribal communities, especially in India.
EVIDENCE
He mentions tribal populations in Telangana, Chhattisgarh, and Jharkhand whose languages are not represented in models like Gemini, and describes how data-centre projects have been built without community consent, harming local water resources [192-224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The problem of language and cultural exclusion in AI models is highlighted in the broader governance discussion that stresses inclusion beyond technical metrics [S1].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Dame Wendy Hall, Neha Kumar, Virginia Dignum, Merve Hickok
Argument 2
AI deployment can be exploitative; data‑center construction can harm local communities and resources (Jibu Elias)
EXPLANATION
Jibu argues that AI deployment can be exploitative, citing data‑centre construction that extracts groundwater and manipulates local communities, illustrating environmental and social harms beyond technical failures.
EVIDENCE
He recounts a data-centre built in Telangana that extracted groundwater in a water-scarce area, with companies interacting only with community leaders and politicians, reflecting exploitative practices [208-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The environmental and community harms linked to AI infrastructure, such as data-centre water extraction, are discussed in the AI-driven cyber-defense briefing on national strategies and infrastructure sovereignty [S30].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
Argument 3
AI systems may generate new mental‑health challenges, such as ‘AI psychosis’, especially among vulnerable groups like the elderly.
EXPLANATION
Elias raises concerns that emerging AI technologies could lead to novel psychological issues and affect older populations, indicating a need to consider health impacts beyond technical performance.
EVIDENCE
He mentions the term “AI psychosis,” admits uncertainty about its validity, and describes conversations with a foundation about elderly people using AI models, signaling emerging health concerns [215-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emerging mental-health risks from AI, including concerns about “AI psychosis,” are mentioned in the World Economic Forum panel on dangerous AI capabilities and the need for careful monitoring of societal impacts [S32].
MAJOR DISCUSSION POINT
Expanding AI safety beyond technical metrics
R
Rasmus Andersen
2 arguments158 words per minute568 words214 seconds
Argument 1
Leaders need foresight on long‑term AI effects to safeguard citizens by 2030‑35 (Rasmus Andersen)
EXPLANATION
Rasmus stresses that policymakers must consider the long‑term trajectory of AI up to 2030‑35 to ensure that citizens are protected from emerging risks, emphasizing strategic foresight in AI governance.
EVIDENCE
He explains his advisory role at the Tony Blair Institute, helping leaders anticipate AI’s impact on citizens and plan for the world in 2030-35 [248-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Strategic foresight for AI governance up to 2030-35 is advocated in the multistakeholder AI governance forum that calls for long-term scenario planning [S31].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
AGREED WITH
Dame Wendy Hall, Sara Hooker, Tom Romanoff
Argument 2
Effective AI safety governance requires government to serve as the central hub where technical, civil‑society, and industry perspectives converge.
EXPLANATION
Andersen observes that the only place where the imperfect perspectives of technologists, civil society, and industry can be coordinated is within governmental structures, underscoring the pivotal role of state actors.
EVIDENCE
He states that the only place it comes together imperfectly is at government, highlighting the importance of the summit’s presence for aligning diverse viewpoints [322-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of government as the coordinating hub for imperfect technical, civil-society, and industry inputs is highlighted in the proposal for an independent multi-stakeholder panel on critical AI infrastructure [S33] and the call for legal certainty through regulation [S44].
MAJOR DISCUSSION POINT
Socio‑political and environmental impacts of AI deployment
T
Tom Romanoff
3 arguments155 words per minute628 words242 seconds
Argument 1
ACM’s role is to bridge technical recommendations with policymakers worldwide (Tom Romanoff)
EXPLANATION
Tom describes ACM’s function of translating technical AI safety concerns into policy advice for governments, acting as a conduit between researchers and decision‑makers.
EVIDENCE
He outlines his position as director of policy for ACM, managing policy committees and connecting computer-science experts with policymakers across regions [258-265].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ACM’s function as a conduit between technical experts and policymakers is described in the multistakeholder AI governance session that emphasizes bridging technical advice to policy action [S31].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
Argument 2
Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
EXPLANATION
Tom introduces the “51 % rule”, explaining that a majority threshold is needed for regulatory change, and urges participants to become advocates to push political will toward AI safety regulations.
EVIDENCE
He conducts a hand-raising exercise, explains the 51 % rule for political and corporate decision-making, gives examples such as deep-fake regulation, and calls for active advocacy rather than moderation [326-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of a 51 % political-will threshold and the need for advocacy to drive regulation mirrors the consensus-building parliamentary session that stresses coordinated stakeholder responses [S28].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
AGREED WITH
Dame Wendy Hall, Rasmus Andersen, Sara Hooker
DISAGREED WITH
Virginia Dignum, Yannis Ioannidis
Argument 3
Translating technical AI safety concerns into understandable language for the public is essential to drive regulatory change.
EXPLANATION
Romanoff stresses that without clear communication of technical risks to lay audiences, it will be difficult to generate the political will needed for effective AI regulation.
EVIDENCE
He urges participants to “Educate them. Figure out ways to communicate the very heady technical aspects of security within the AI stack to the common person,” emphasizing public education as a catalyst for policy action [357-358].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasizing clear communication of technical risks to the public to generate political will is supported by the Policymaker’s Guide that calls for prioritizing human welfare over technical advancement [S41].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
J
Jeanna Matthews
1 argument145 words per minute277 words113 seconds
Argument 1
Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
EXPLANATION
Jeanna asserts that history demonstrates reliance on goodwill does not guarantee safety, implying that enforceable regulations are required to protect against AI harms.
EVIDENCE
She states that history does not show safety will happen automatically and that powerful narratives must change, emphasizing the need for mandatory safeguards [266-270][359-363].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UNGA’s call for universal guardrails and mandatory standards underscores the need for enforceable safeguards rather than reliance on goodwill [S36].
MAJOR DISCUSSION POINT
Policy, regulation, and institutional capacity
AGREED WITH
Yannis Ioannidis, Tom Romanoff, Merve Hickok, Participant
P
Participant
2 arguments126 words per minute141 words67 seconds
Argument 1
Extend model‑card and dataset‑card frameworks to cover multiple languages and cultures (Participant)
EXPLANATION
The participant suggests that regulatory artifacts such as model cards and dataset cards should be adapted to reflect multilingual and multicultural contexts, ensuring AI safety across diverse populations.
EVIDENCE
He asks how regulatory artifacts can be extended to multiple languages, contexts, and cultures, emphasizing the need for dynamic, language-aware tools [364].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for multilingual, inclusive model and dataset documentation aligns with the equality and inclusion lightning talk that calls for broader representation in AI artifacts [S45] and the AI measurement institute’s work on multilingual standards [S46].
MAJOR DISCUSSION POINT
Diversity, inclusion, and representation in AI governance
AGREED WITH
Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Merve Hickok
Argument 2
Suggest dynamic, multilingual regulatory artifacts (model cards, dataset cards) for broader accountability (Participant)
EXPLANATION
The participant argues that model‑card and dataset‑card evaluations must be dynamic and reflect language diversity, and that governments could require such multilingual disclosures from model providers.
EVIDENCE
He notes that current artifacts perform well in English but not in other languages, calls for dynamic, multilingual standards, and mentions that governments could mandate these disclosures, citing California as an example [366-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation for dynamic, multilingual regulatory artifacts is echoed in the equality-focused discussion on extending AI documentation to diverse languages [S45] and the development of AI metrology tools for systematic measurement [S46].
MAJOR DISCUSSION POINT
Measurement, transparency, and accountability
Agreements
Agreement Points
AI safety must extend beyond technical robustness to include governance, deployment context, and societal impact.
Speakers: Virginia Dignum, Lourino Chemane, Yannis Ioannidis, Dame Wendy Hall, Sara Hooker, Merve Hickok, Neha Kumar, Jibu Elias
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum) Safety is the protection of people and requires multidisciplinary governance, human oversight, and ethical standards (Lourino Chemane) Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs and outputs (Yannis Ioannidis) Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall) Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker) Safety must protect human rights, democratic values, and be driven by an expanded governance narrative (Merve Hickok) Human‑centred design and HCI research highlight the need for inclusive, context‑aware AI systems (Neha Kumar) Language and cultural exclusion (e.g., tribal groups) illustrate extractive AI practices (Jibu Elias)
All speakers stress that focusing only on technical metrics (e.g., model alignment, robustness) is insufficient; AI safety requires multidisciplinary governance, attention to deployment contexts, inclusive design, and societal impact considerations [14-24][30-38][108-124][90-120][135-186][242-246][285-303][192-205].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with calls for multidisciplinary AI governance that incorporate societal impact, as emphasized by Virginia in S55 and the socio‑technical paradigm discussions in S67 and S68.
Inclusion and diversity are essential for ethical and safe AI outcomes.
Speakers: Dame Wendy Hall, Neha Kumar, Jibu Elias, Virginia Dignum, Merve Hickok
Lack of diversity undermines ethical AI; inclusive representation is essential for true safety (Dame Wendy Hall) Question who decides, who benefits, and who is involved in design to avoid exclusion (Neha Kumar) Language and cultural exclusion (e.g., tribal groups) illustrate extractive AI practices (Jibu Elias) Even perfectly designed AI systems can cause harm if societal inclusion and imagination are lacking (Virginia Dignum) Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
Speakers highlight that gender, linguistic, and cultural representation gaps create blind spots and can lead to harm; inclusive decision-making and diverse participation are required for trustworthy AI [78-88][285-303][192-205][104-105][271-283].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of inclusive stakeholder participation and gender considerations is highlighted in S61 (gender inclusivity) and S62 (multistakeholder AI standards), and further expanded to address gender‑based violence in AI safety in S73.
Systematic measurement, monitoring, and documentation (e.g., AI metrology, model‑cards) are needed to assess AI safety over time.
Speakers: Dame Wendy Hall, Sara Hooker, Participant, Virginia Dignum, Tom Romanoff
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall) Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker) Extend model‑card and dataset‑card frameworks to cover multiple languages and cultures (Participant) The panel should develop a concrete collaborative model for AI safety governance to be implemented in the next year (Virginia Dignum) Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Across the panel there is consensus on creating concrete, transparent artefacts (AI metrology, model-cards, longitudinal studies) and on institutionalising them through standards or collaborative models to enable ongoing safety assessment [90-120][135-186][364-371][375-379][326-358].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions call for concrete technical standards and continuous evaluation mechanisms, such as model‑cards and AI metrology (S70), systematic assessment frameworks (S74), and large‑scale monitoring infrastructure (S75).
Regulation should focus on AI outputs and societal impacts rather than only on the underlying technology.
Speakers: Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Merve Hickok, Participant
Safety must be enforced through law on AI outputs, not just on the technology itself (Yannis Ioannidis) Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff) Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews) Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok) Extend model‑card and dataset‑card frameworks to cover multiple languages and cultures (Participant)
Speakers agree that legal and policy mechanisms must target the real-world consequences of AI (outputs, harms) and that voluntary measures are inadequate; concrete regulatory artefacts can be mandated to ensure compliance [108-124][326-358][266-270][359-363][364-371].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates distinguishing output‑oriented regulation from technology‑centric approaches appear in S56 and S60, while S55 stresses the need for legal frameworks that target societal outcomes of AI systems.
Long‑term foresight, scenario planning and longitudinal monitoring are crucial for AI safety.
Speakers: Dame Wendy Hall, Rasmus Andersen, Sara Hooker, Tom Romanoff
AI safety requires long‑term monitoring and longitudinal studies to understand delayed consequences (Dame Wendy Hall) Leaders need foresight on long‑term AI effects to safeguard citizens by 2030‑35 (Rasmus Andersen) Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker) Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
The panel stresses that AI impacts unfold over extended periods; therefore, policymakers need scenario-based foresight and continuous evidence-gathering to guide regulation and mitigation [89-103][248-256][151-156][326-358].
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations for precautionary principles, scenario building, and longitudinal monitoring are documented in S72, echoed in strategic foresight discussions in S57, and operationalized through continuous tracking initiatives in S75.
Similar Viewpoints
Both argue that the core safety challenge lies in how AI is used and governed, not merely in technical robustness of the models themselves [14-24][108-124].
Speakers: Virginia Dignum, Yannis Ioannidis
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum) Distinguish safety of AI technology from safety of AI use; emphasize regulation of inputs and outputs (Yannis Ioannidis)
Both call for precise, systematic measurement frameworks (AI metrology, model‑card reporting) to make safety discussions concrete and accountable [90-120][135-186].
Speakers: Dame Wendy Hall, Sara Hooker
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact (Dame Wendy Hall) Safety discussions need precision, acknowledgment of trade‑offs, and transparent reporting of what is sacrificed (Sara Hooker)
Both stress that AI safety must be grounded in multidisciplinary, human‑centred approaches that consider social, ethical, and contextual factors [30-38][285-303].
Speakers: Lourino Chemane, Neha Kumar
Safety is the protection of people and requires multidisciplinary governance, human oversight, and ethical standards (Lourino Chemane) Human‑centred design and HCI research highlight the need for inclusive, context‑aware AI systems (Neha Kumar)
Both argue that relying on goodwill is inadequate; robust, rights‑based regulatory safeguards are required to prevent harm [242-246][266-270][359-363].
Speakers: Merve Hickok, Jeanna Matthews
Safety must protect human rights, democratic values, and be driven by an expanded governance narrative (Merve Hickok) Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
Both focus on the political dimension: leaders must anticipate future AI impacts and actively mobilise political will to enact effective regulation [248-256][326-358].
Speakers: Rasmus Andersen, Tom Romanoff
Leaders need foresight on long‑term AI effects to safeguard citizens by 2030‑35 (Rasmus Andersen) Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Unexpected Consensus
A technical expert (Yannis Ioannidis) aligns with policy‑oriented speakers on enforcing AI safety through law on outputs.
Speakers: Yannis Ioannidis, Tom Romanoff, Jeanna Matthews, Merve Hickok
Safety must be enforced through law on AI outputs, not just on the technology itself (Yannis Ioannidis) Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff) Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews) Change the safety narrative to focus on rights, freedoms, and democratic participation (Merve Hickok)
Yannis, who frames himself as a technical person, nevertheless calls for legal regulation of AI outputs-a stance that converges with the explicitly policy-driven arguments of Tom, Jeanna, and Merve, showing an unexpected cross-disciplinary agreement on outcome-based regulation [108-124][326-358][266-270][359-363][271-283].
Overall Assessment

The panel displayed strong consensus that AI safety cannot be reduced to technical robustness alone; it requires multidisciplinary governance, inclusive design, systematic measurement, and outcome‑oriented regulation. Participants from technical, policy, civil‑society, and regional backgrounds converged on these themes, while emphasizing the need for concrete tools (model‑cards, AI metrology) and long‑term foresight.

High consensus across most speakers, indicating a shared understanding that future AI governance must integrate technical, social, and legal dimensions. This broad agreement creates a solid foundation for developing collaborative frameworks, standards, and policy recommendations in the coming year.

Differences
Different Viewpoints
Scope of AI safety – technical robustness versus governance and societal context
Speakers: Virginia Dignum, Yannis Ioannidis, Tom Romanoff
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum) Distinguish safety of AI technology from safety of AI use; technology itself has no safety issue, focus on regulating inputs and outputs (Yannis Ioannidis) Highlight political‑will thresholds (51 % rule) and the need for advocacy to translate technical risks into policy (Tom Romanoff)
Virginia argues that safety cannot be reduced to model alignment or robustness and must include governance, incentives and lived realities [14-24]. Yannis counters that the technology itself is not a safety problem and that regulation should target the data fed into models and the way outputs are used [108-124]. Tom adds that achieving safety depends on political mobilisation and thresholds rather than purely technical fixes [326-358].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between a purely technical safety focus and broader governance considerations is reflected in S55, the technical‑vs‑governance framing of S56, and the socio‑technical perspectives of S67 and S68.
Approach to measuring AI safety – long‑term longitudinal monitoring vs. immediate precise reporting of trade‑offs
Speakers: Dame Wendy Hall, Sara Hooker
Propose “AI metrology” and the study of “social machines” to systematically measure AI impact; stress need for longitudinal studies to capture delayed consequences (Dame Wendy Hall) Safety discussions need precision, acknowledgment of trade‑offs and transparent reporting of what is sacrificed (Sara Hooker)
Wendy calls for a new discipline of AI metrology and long-term, longitudinal evidence-gathering to understand impacts over time [89-120]. Hooker argues for more precise, short-term accountability, demanding clear reporting of language coverage and safety gaps as a concrete step [135-186]. The two differ on whether safety measurement should prioritise long-term studies or immediate, granular transparency.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between immediate risk reporting and longer‑term scenario planning is discussed in S72 (precautionary tools) and S73, which presents a temporal framework prioritizing observable risks.
Who should coordinate the convergence of technical, civil‑society and industry perspectives – government versus multi‑stakeholder bodies
Speakers: Rasmus Andersen, Tom Romanoff
Effective AI safety governance requires government to serve as the central hub where imperfect perspectives converge (Rasmus Andersen) ACM’s role is to bridge technical recommendations with policymakers worldwide and to mobilise advocacy for regulatory change (Tom Romanoff)
Rasmus states that the only place where diverse viewpoints can be coordinated is within government structures [322-324]. Tom describes ACM as the conduit between researchers and policy makers and urges participants to become advocates to push political will [258-265][326-358]. The tension lies in whether the state or a multistakeholder professional association should lead the coordination effort.
POLICY CONTEXT (KNOWLEDGE BASE)
Multi‑stakeholder coordination is advocated in S62 and S64, while S66 argues for a clear governmental lead, and S63 highlights the need for global cooperation among diverse actors.
Unexpected Differences
Attitude toward technical safety of AI models
Speakers: Yannis Ioannidis, Dame Wendy Hall
Technology itself does not raise safety issues; focus should be on regulating inputs and outputs (Yannis Ioannidis) Propose AI metrology to systematically measure AI impact, implying that technical aspects also need rigorous safety assessment (Dame Wendy Hall)
Yannis treats the AI model as inherently safe and shifts responsibility to use-case regulation, whereas Wendy argues that even the technical side requires a new measurement discipline (AI metrology) to ensure safety, revealing an unexpected clash over whether technical robustness itself warrants dedicated safety science [108-124][90-120].
POLICY CONTEXT (KNOWLEDGE BASE)
S55 notes that focusing solely on technical safety is insufficient, indicating divergent attitudes toward the role of technical safeguards in overall AI safety.
Overall Assessment

The panel displayed a broad consensus that AI safety must go beyond pure technical robustness and involve governance, inclusion, and human‑rights considerations. However, clear disagreements emerged around the primary locus of safety (technology vs. use), the preferred measurement horizon (long‑term longitudinal studies vs. immediate precise reporting), and the institutional arena best suited to coordinate diverse stakeholder inputs (government versus multistakeholder bodies such as ACM).

Moderate to high – while participants share the overarching goal of safer AI, they diverge on methodological and institutional pathways, which may impede the formulation of unified policy recommendations and could lead to fragmented governance approaches.

Partial Agreements
Both agree that AI safety cannot be limited to technical metrics and must involve multidisciplinary governance and protection of people, but Virginia stresses moving beyond technical framing while Lourino focuses on concrete policy instruments such as data‑centre regulation and cybersecurity [14-24][30-38].
Speakers: Virginia Dignum, Lourino Chemane
AI safety must incorporate governance, deployment context, and societal impact, not just technical robustness (Virginia Dignum) Safety is the protection of people; AI governance must prioritise human, social and institutional impact, requiring multidisciplinary input (Lourino Chemane)
Both see history as a warning that safety cannot rely on goodwill alone and call for stronger safeguards. Merve frames this as a narrative shift toward rights and democracy, while Jeanna calls explicitly for enforceable regulations [271-283][266-270].
Speakers: Merve Hickok, Jeanna Matthews
Safety must protect human rights, democratic values and be driven by an expanded governance narrative (Merve Hickok) Historical lessons show that voluntary good intentions are insufficient; mandatory safeguards are needed (Jeanna Matthews)
Takeaways
Key takeaways
AI safety must be understood as a socio‑technical challenge, not merely a set of technical robustness metrics. Governance, deployment context, incentive structures, and institutional capacity shape whether AI creates value or harm. Multidisciplinary input (law, ethics, social sciences, HCI, labor, education, affected communities) is essential for effective AI policy. Human‑centred design, continuous oversight, and accountability mechanisms are required to protect people, especially women, children, youth, and marginalized groups. Diversity and inclusive representation in decision‑making bodies are critical; lack of gender and cultural diversity undermines ethical outcomes. Transparency through systematic reporting (model cards, dataset cards, AI metrology) and explicit articulation of trade‑offs is needed. AI deployment can have exploitative socio‑political and environmental impacts (e.g., data‑center resource extraction, language exclusion). Policy and regulatory frameworks must evolve to address real‑world harms, enforce safety on AI outputs, and align with democratic values and human rights. The ACM plans to bridge technical research with policymakers and launch a dedicated AI measurement journal to foster a science of “social machines.” Effective change requires active advocacy, shifting narratives from voluntary goodwill to mandatory safeguards, and mobilising political will (the “51 % rule”).
Resolutions and action items
Mozambique will finalize its national AI strategy, data policy, and cybersecurity regulations, incorporating UNESCO ethics principles and focusing on infrastructure sovereignty. The ACM will launch a new journal on AI measurement/metrology to collect and share systematic evaluation data. Panelists agreed to draft a post‑summit report/model that captures the discussed themes and recommendations for the next year. Call for governments to require multilingual, culturally aware model‑card and dataset‑card disclosures for AI systems deployed within their jurisdictions. Commitment from ACM policy office (Tom Romanoff) to continue translating technical safety recommendations into policy engagements worldwide.
Unresolved issues
How to operationalise inclusive governance structures that meaningfully involve women, children, indigenous and language‑minority communities. Specific mechanisms for enforcing safety on AI outputs versus the technology itself remain undefined. Concrete standards for multilingual and culturally contextualised regulatory artifacts (model cards, dataset cards) have not been established. The balance between technical innovation speed and the time needed for longitudinal safety studies is still an open question. How to ensure accountability for extractive practices (e.g., data‑center construction, labor exploitation) linked to AI deployment. What legal or punitive measures should apply when AI systems cause serious harm (e.g., criminal liability, civil lawsuits).
Suggested compromises
Acknowledge inevitable trade‑offs in AI design and require explicit public reporting of what safety aspects are being sacrificed. Combine technical robustness measures (alignment, red‑team testing) with governance tools (human oversight, institutional accountability) rather than treating them as mutually exclusive. Adopt a phased approach: start with mandatory disclosures and multilingual model‑card requirements, then progress to stronger regulatory enforcement as capacity builds. Leverage existing international frameworks (UNESCO ethics principles, national AI strategies) while tailoring them to local socio‑cultural contexts. Encourage moderate stakeholders to move from a “wait‑and‑see” stance to active participation by aligning incentives with long‑term societal benefits.
Thought Provoking Comments
AI safety needs to move beyond technical robustness and consider deployment context, governance capacity, incentive structures, and the lived reality of communities; harms arise because AI is embedded in institutional, economic and political systems.
Sets a foundational reframing of the entire discussion, shifting focus from model‑centric metrics to socio‑technical ecosystems.
Established the thematic lens for the panel, prompting subsequent speakers to address multidisciplinary governance, inclusion, and real‑world impact rather than purely technical fixes.
Speaker: Virginia Dignum
Safety is the protection of people, not just systems; AI governance must prioritize human, social, and institutional impact, involve law, ethics, labor, education, and affected communities, and ensure continuous human oversight and accountability.
Introduces a concrete, people‑first definition of safety and emphasizes the need for multidisciplinary input and continuous oversight.
Reinforced Dignum’s framing and broadened the conversation to include policy mechanisms such as data policies, cyber‑security, and digital government interoperability.
Speaker: Lourino Chemane
If it’s not diverse it’s not ethical – the lack of women and other under‑represented groups at the summit shows that ethical AI cannot be achieved without inclusive decision‑making. Also proposes the concept of ‘AI metrology’ to study AI as socio‑technical ‘social machines’.
Combines a sharp critique of gender bias with a novel proposal for a new scientific discipline (AI measurement/metrology) to systematically study AI’s societal impact.
Shifted the tone from abstract policy talk to concrete calls for diversity and measurement, inspiring later speakers (e.g., Sara Hooker, Merve Hickok) to discuss accountability, metrics, and the need for new research infrastructures.
Speaker: Dame Wendy Hall
Separate safety of the AI technology from safety of AI use; the technology itself can be robust, but the inputs, outputs, and human choices determine real‑world safety, requiring involvement of humanities, law, ethics, and civic society.
Clarifies a nuanced distinction that many participants had conflated, highlighting where technical work ends and socio‑political governance begins.
Prompted other panelists to discuss the role of data, human agency, and regulatory frameworks, deepening the analysis of where responsibility lies.
Speaker: Yannis Ioannidis
The real signal of whether we care about safety is the prestige and power structures that allocate resources; we need precise, transparent reporting of what safety parameters models cover and what they omit, acknowledging inevitable trade‑offs.
Moves the discussion from high‑level ideals to actionable transparency, emphasizing that safety is a political and economic signal, not just a technical checkbox.
Steered the conversation toward concrete accountability mechanisms (model cards, dataset cards) and sparked agreement on the need for explicit trade‑off disclosures.
Speaker: Sara Hooker
AI is becoming an extractive, exploitative construct: data‑center construction harms local water supplies, language minorities are excluded from models, and AI tools can cause ‘AI psychosis’ among vulnerable users.
Provides vivid, ground‑level examples of how AI deployment can cause social and environmental harm, grounding abstract safety concerns in lived realities.
Shifted the panel from policy theory to tangible harms, prompting participants like Merve Hickok and Neha Kumar to stress rights, democracy, and inclusive design.
Speaker: Jibu Elias
History shows that without deliberate action, safety narratives are shaped by the powerful; we must change the narrative to protect rights, freedoms, and democratic participation, not just focus on existential or nuclear‑style risks.
Frames AI safety within a historical and political lens, arguing that current safety discussions repeat past power dynamics unless actively contested.
Reinforced calls for activist stances, influencing later remarks about moving beyond “moderate” positions and encouraging participants to demand concrete regulatory change.
Speaker: Merve Hickok
The 51 % rule: regulatory change only happens when a majority of political or corporate power aligns; we must move from being moderates to activists who educate and pressure decision‑makers.
Introduces a pragmatic political insight about how change actually occurs, coupled with a clear call to action for the audience.
Marked a turning point toward a more urgent, action‑oriented tone, culminating in Jeanna Matthews’ concluding appeal for insistence and collective responsibility.
Speaker: Tom Romanoff
Inclusivity must be examined not just at the level of rhetoric but by zooming in on who designs, who benefits, and who is left out; lessons from feminist studies and development studies can help ask the ‘who’ question concretely.
Bridges HCI and feminist scholarship to critique superficial diversity claims and propose concrete analytical lenses.
Deepened the discussion on inclusion, prompting reflections on design practices and the need for measurable outcomes rather than buzzwords.
Speaker: Neha Kumar
Overall Assessment

The discussion was shaped by a series of pivotal interventions that moved the conversation from a generic, technical framing of AI safety to a richly layered, socio‑political analysis. Virginia Dignum’s opening set the agenda, but it was the successive challenges—Lourino Chemane’s people‑first definition, Dame Wendy Hall’s critique of gender exclusion and call for AI metrology, Sara Hooker’s exposure of power‑driven safety signals, Jibu Elias’s concrete examples of extractive harms, and Tom Romanoff’s 51 % rule—each acted as a turning point that redirected focus, introduced new concepts, and heightened the urgency for actionable governance. Collectively, these comments deepened the panel’s understanding of safety as an interdisciplinary, inclusive, and politically contested issue, steering the dialogue toward concrete accountability mechanisms and a call for activist engagement.

Follow-up Questions
How can we shift the discourse from a purely technical AI safety focus to a broader inclusive societal and institutional approach?
Addressing this question is crucial to ensure that AI safety considerations incorporate governance, ethics, and real‑world impact rather than remaining confined to model robustness alone.
Speaker: Virginia Dignum
What specific trade‑offs are being made in AI models, and can providers transparently report which safety parameters are covered and which are omitted?
Transparency about omitted safety tests and trade‑offs would allow stakeholders to understand what risks are being accepted and to hold developers accountable.
Speaker: Sara Hooker
How can we mitigate the extractive and exploitative aspects of AI development, such as the labor conditions of data annotation workers and the environmental impacts of data‑center construction?
Investigating these socio‑economic and environmental harms is essential to prevent AI from deepening inequality and resource depletion.
Speaker: Jibu Elias
Does history indicate that AI safety will automatically benefit everyone, or are enforceable mandates (musts) required? Are we serious about AI safety?
Understanding whether voluntary measures suffice or whether binding regulations are needed informs policy design and prevents repeating past failures.
Speaker: Jeanna Matthews
How can regulatory artifacts like dataset cards, model cards, system cards, rigorous evaluations, and user‑feedback mechanisms be extended to cover multiple languages, contexts, and cultures?
Ensuring these tools work across linguistic and cultural boundaries is vital for equitable AI safety worldwide.
Speaker: Unnamed Participant (audience)
What longitudinal evidence is needed to assess the impact of age‑based social‑media bans, and how can studies be designed to capture unintended consequences?
Long‑term studies are required to determine whether bans protect youth or drive them to riskier, hidden platforms, informing better policy.
Speaker: Wendy Hall
How can we develop a science of AI measurement or AI metrology to study ‘social machines’ and their socio‑technical dynamics?
A systematic measurement framework would enable consistent evaluation of AI’s societal effects, supporting evidence‑based governance.
Speaker: Wendy Hall
What mechanisms can ensure the inclusion of women, children, and other vulnerable groups in AI governance and design processes?
Inclusive participation is necessary to avoid bias and to make AI systems safe and beneficial for all segments of society.
Speaker: Wendy Hall; Neha Kumar
How can national AI strategies (e.g., Mozambique’s) integrate data policy, cybersecurity, and digital government frameworks to ensure safety and sovereignty?
Research on policy integration can guide other nations in building coherent, safe AI ecosystems aligned with national interests.
Speaker: Lorine Chemane
What role should interdisciplinary collaboration (law, ethics, social sciences, humanities) play in AI safety governance and regulation?
Cross‑disciplinary input is needed to address the full spectrum of safety concerns beyond technical robustness.
Speaker: Yannis Ioannidis

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How AI Is Transforming Diplomacy and Conflict Management

How AI Is Transforming Diplomacy and Conflict Management

Session at a glanceSummary, keypoints, and speakers overview

Summary

The Belfer Center’s Emerging Tech Program launched the MOVE 37 project to examine how artificial intelligence can be integrated into diplomatic negotiation and policy-making processes [4-5][19-22]. Panelists highlighted that modern negotiations involve dozens of parties, multiple issues, and thousands of documents, creating information overload and time pressure that AI could help manage [41-48][66-72]. Gabriela Ramos illustrated this by describing the UNESCO AI ethics recommendation, which required processing 55,000 public comments and mapping the positions of 193 countries, a task she said would have benefited from more AI support [139-144]. Nandita Balakrishnan noted that AI access varies across academia, government, and industry, and that public-sector analysts still perform labor-intensive manual assessments that could be accelerated with AI tools [170-176][181-189]. Charlie Posniak warned that large language models are opaque, lack verifiable fluency, and cannot alone provide the accountability needed for high-stakes negotiations, emphasizing the need for a broader set of computational methods built over the past 80 years [82-85][87-92]. He identified three technical challenges: representing dynamic, strategic interactions; handling intentional misrepresentation by actors; and defining success criteria for multi-party outcomes [94-100]. To address these, his team proposes a cyclical workflow of research, analysis, strategy formulation, and real-time execution, supported by autonomous research agents, data validation, and live transcription services [102-107][108-115]. All speakers agreed that human authority must remain central, with AI tools kept modular, transparent, and scoped to augment rather than replace negotiators [117-121][284-290]. Robyn Scott presented survey results showing that over 90 % of public servants are optimistic about AI’s potential, yet most pilots lack systematic evaluation and many officials do not understand their own ethical frameworks, underscoring a skills gap [228-236][242-250]. She also cautioned against “sleeping at the wheel”-over-reliance on AI that can lead to false confidence-and advocated keeping users “above the algorithm” to preserve agency [254-259][330-334]. Gabriela stressed that cultural and linguistic diversity must be reflected in training data to avoid bias, citing examples where single-language models mischaracterized negotiators’ perspectives [391-399][423-425]. The panel concluded that while AI can improve data handling, predictive insights, and strategic option generation, its deployment must be carefully governed to maintain accountability, cultural sensitivity, and human judgment [286-290][350-357]. Michael McQuade summarized that MOVE 37 will develop AI-augmented tools, evaluation methodologies, and collaborative networks, positioning the project as an early step toward a new discipline at the intersection of technology and international diplomacy [125-129][204-209][432-438].


Keypoints

Major discussion points


AI as an augment-tool for diplomatic negotiations, not a replacement for humans – the panel repeatedly stressed that negotiations are “fundamentally interpersonal” and that any AI system must keep “human authority … central” and serve as a support rather than a decision-maker [24-30][118-121].


Technical and ethical challenges of deploying AI in diplomacy – concerns were raised about model opacity, accountability, strategic mis-representation, cultural bias, data-poisoning and “sleeping at the wheel” over-reliance [83-86][94-100][322-327][330-334][391-403].


Concrete AI functionalities being explored – the team outlined a task-breakdown (research, analysis, strategizing, execution) and described prototypes such as autonomous research agents, real-time transcription/translation, position-tracking dashboards and predictive geopolitics models [102-108][284-290][286-290].


Capacity-building and institutional adoption gaps – surveys show public-sector optimism about AI but also “pilotitis,” low evaluation rates, limited AI literacy and a large skills gap; the panel highlighted the need for training, clear ethical frameworks, and systematic rollout of pilots [226-242][248-250].


Ensuring cultural and linguistic diversity in AI systems – participants warned that AI must reflect the world’s many languages and cultural perspectives to avoid reinforcing individualistic or biased outcomes; they cited UNESCO’s multilingual work and the Swiss multilingual LLM initiative as examples [391-403][424-425].


Overall purpose / goal


The discussion was convened to launch and shape the MOVE 37 project – an initiative of the Belfer Center’s Emerging Tech Program that aims to design, prototype, and responsibly integrate artificial-intelligence tools into the practice of diplomacy and negotiation. The organizers sought input from scholars, practitioners, and policymakers to map the problem space, identify research and development priorities, and build a collaborative community that will guide the project’s roadmap.


Overall tone and its evolution


Opening segment (0:00-10:00): Formal and upbeat, emphasizing opportunity, collaboration, and the excitement of pioneering a new research frontier [1-7][19-23].


Middle segment (10:00-30:00): Becomes more analytical and cautious, detailing the complexity of negotiations, the technical limits of LLMs, and the ethical risks of opacity, bias, and over-reliance [83-86][94-100][322-327][330-334].


Later segment (30:00-end): Shifts toward pragmatic optimism, focusing on concrete use-cases, capacity-building, and concrete next steps while still acknowledging the need for vigilance and human oversight [226-242][284-290][391-403].


Overall, the tone moves from enthusiastic introduction, through a sober appraisal of challenges, to a constructive, solution-oriented outlook that calls for continued collaboration and responsible development.


Speakers

J. Michael McQuade – Director of the Emerging Tech Program at the Belfer Center, runs the MOVE 37 initiative; expertise in international policy, technology, geopolitics, and AI for diplomacy. [S8]


Charlie Posniak – Full-time fellow and research fellow at the Belfer Center; works on AI-enabled diplomatic negotiation tools and policy guidelines. [S1]


Slavina Ancheva – MPP student and research fellow at the Belfer Center; focuses on framing negotiation complexity and AI augmentation for diplomacy. [S4]


Gabriela Ramos – Former Assistant Director General for Social and Human Sciences at UNESCO; former co-chair of the UN AI Advisory Panel and Spain-India Ambassador for AI; expertise in AI ethics, international negotiations, and UNESCO AI recommendations. [S14]


Nandita Balakrishnan – Director of Intelligence at the Special Competitive Studies Project (SCSP); expertise in intelligence, AI, geopolitics, and public-sector AI adoption.


Robyn Scott – CEO and co-founder of Apolitical; collaborates with Stanford HAI; expertise in government innovation, AI training for public-sector policymakers. [S6]


Audience – Various attendees (e.g., senior advisor Sam Dawes, Indian classical-dance teacher Devika Rao, JPL South Asia staff member Arman); roles not specified.


Additional speakers:


Sam Dawes – Senior Advisor to the Oxford University AI Governance Initiative and Director of Multilateral AI; background in diplomacy (worked for Kofi Annan, UK Foreign Office, Cabinet Office).


Devika Rao – Indian classical dance teacher; involved in cultural-education frameworks linking India and the UK.


Arman – Staff member at JPL South Asia; interested in AI’s impact on balance of power in negotiations.


Full session reportComprehensive analysis and detailed insights

The session opened with J. Michael McQuade, director of the Belfer Centre’s Emerging Tech Programme, which “teaches, trains, and does research on the applications of science and technology for international affairs” and convenes scholars, practitioners and students to explore the intersection of technology, science and geopolitics [1-3]. He announced the launch of the MOVE 37 initiative – a component of the Emerging Tech Programme created “to look at where emerging technologies are creating new policy frontiers… and the implications… for governance, geopolitics, global stability and global conflict” [4-5]. Because artificial intelligence is a “major aspect of our work” in relating technology to modern geopolitical issues, a panel of experts was introduced: Gabriela Ramos (UNESCO), Nandita Balakrishnan (Special Competitive Studies Project), Robyn Scott, CEO and co-founder of Apolitical, and two Belfer researchers, Charlie Posniak and Slavina Ancheva[6-15].


The gathering’s purpose was to mobilise collaborators for a “major new project… looking at the use of artificial intelligence in diplomacy and negotiation” [19-22]. McQuade stressed that the work is not confined to a small Cambridge team but seeks “collaborators, partners, and input from the community… around the world” to shape how AI will be used responsibly in high-stakes diplomatic processes [23-26]. He framed diplomacy as a fundamentally human activity that could be augmented by AI to improve outcomes while preserving human agency [24-30][118-121].


Agenda and framing - Slavina Ancheva set the discussion’s three focal areas: (1) the current complexity of negotiation processes; (2) AI’s potential to alleviate those challenges; and (3) the need to think beyond large-language models (LLMs) toward responsible deployment [40-44]. She asked participants to “close your eyes and imagine” a negotiation with ten agenda items, noting that “it’s not just about those 10 items” because “a lot of other factors… both inside and outside that room” influence the outcome [45-48]. A typical negotiation may involve “seven counterparts from seven political groups, seven different countries… and behind you… 27 other countries you are representing” [48-49], illustrating the multi-layered nature of modern diplomacy. The resulting “information overload” includes “thousands of documents, transcripts, drafts” and is compounded by “finite resources, strategic group-think, and time pressure” [66-72].


Challenges illustrated - Gabriela Ramos described negotiating the UNESCO Recommendation on the Ethics of Artificial Intelligence, a process that involved “193 countries negotiating during COVID” and generated “55 000 comments” [139-144]. She noted that AI could have helped “map the positioning of countries” and provided a “repository of what is the traditional position of certain countries” to streamline briefings and stakeholder outreach [144-145]. Ramos warned that AI tools must avoid “misrepresentation, over-representation of certain cultures, certain languages, assumptions” and that any system should “open a space of human understanding” rather than simply “beat the person in front of me” [354-367]. When asked about cultural inclusivity, she stressed that “culture is expressed by language” and that models need multilingual training to capture philosophies such as Ubuntu, otherwise they risk “maximising individual welfare” at the expense of collective worldviews [391-403][424-425].


Audience questions - Sam Dawes asked (a) how to ensure diverse cultural inputs are embedded in AI models and (b) how to guard against data-poisoning or prompt-injection attacks. Ramos answered that culture is conveyed through language, so training on a wide range of languages and continuously validating source material are essential, and she emphasized the need for continual ground-truth testing to detect poisoning [391-403][424-425][430-432]. Arman raised concerns about the impact on the balance of power when data access is uneven. McQuade responded that the project is examining how AI tools can create competitive leverage but also risk exacerbating asymmetrical information if not widely disseminated [208-212][431-432].


Sectoral perspectives - Nandita Balakrishnan observed that “the public sector has been more in the passenger seat, if not the backseat” while academia and industry enjoy broader toolsets [170-172]. She recounted a mentor pointing out a “ten-year-old data point that completely negates” her analysis, a gap that “AI could have identified and synthesised” [185-189]. From this perspective she argued that AI should be viewed as a “data point” that requires human explanation and accountability, especially in high-stakes policy work [297-304]. Balakrishnan also highlighted AI’s strategic importance for geopolitics, noting that “AI has fundamentally changed the threat landscape” and that “the public sector must leverage these tools… in intelligence, the State Department, commerce, OPM” to stay competitive [194-200]. She cited ongoing projects that use AI to “predict geopolitical events” for both military and diplomatic applications, arguing that demonstrable use-cases are needed to convince policymakers of AI’s value [201-203].


Technical roadmap - Charlie Posniak provided a technical perspective, first dismissing the notion that “you can just ask an LLM” because “their fluency isn’t necessarily verifiable in international and world politics” and the models are “opaque… not always viable” for accountability [82-85]. He reminded the audience of an “80-year-old toolkit” of game theory, decision analysis and machine-learning methods that must be integrated with modern AI rather than replaced by chat-bots [86-92]. Posniak identified three core challenges for diplomatic AI: (1) representing dynamic, strategic interactions that evolve over time [94-98]; (2) handling intentional mis-representation by actors [98-99]; and (3) defining success criteria for multi-party outcomes [100-101]. To address these, his team proposes a cyclical workflow of “research, analysis, strategising, and execution” supported by “autonomous research agents, source validation, real-time transcription and translation services” and modular, transparent tools [102-115][117-121]. He later expanded on concrete functionalities such as “position-tracking dashboards, strategy sandboxes, red-team training and predictive geopolitics models” that can process “vast amounts of unstructured data” [284-290][286-290].


Human-in-the-loop emphasis - During the discussion a brief slip (“woman in the loop”) illustrated the panel’s humor and reinforced the emphasis on keeping humans central to AI-augmented processes [350-352].


Consensus - All panelists agreed that AI must remain an augment-tool rather than a replacement for diplomats. Both Ancheva and Posniak stressed that negotiations are “fundamentally interpersonal” and that AI should “give them the tools to manage these complexities much better” while keeping “human authority… central” [61-62][117-121]. Ramos echoed this, insisting that any AI-driven recommendation must be “questioned” and that “the AI tools… should not be built to beat the person in front of me” [352-367]. Scott reinforced the idea of staying “above the algorithm” – using AI as support rather than surrendering agency – and warned that without careful framing AI could create a zero-sum dynamic that diminishes human agency [254-259].


Capacity gaps - Robyn Scott presented empirical evidence on capacity gaps. A survey of 5 000 public servants showed that “north of 90 % think there is huge possibility in the public sector” yet “only 26 % of them say they understand their own country’s ethical frameworks” and many pilots lack systematic evaluation [226-242][248-250]. She described the phenomenon of “sleeping at the wheel,” where users over-trust AI after it reaches high accuracy, leading to false confidence [330-334]. Her “below-the-algorithm / above-the-algorithm” heuristic urges policymakers to “move people up above the algorithm” to preserve decision-making power [256-259].


Strategic implications - McQuade argued that AI will provide “competitive leverage” and that tools should be “dispersed actively and offensively, not defensively” to reshape power balances in both cooperative and adversarial negotiations [208-212][431-432]. Balakrishnan reinforced this view, noting that AI is now “a foundational way we need to think about geopolitics” and that “you cannot divorce AI… when you’re trying to understand geopolitics and foreign policy” [194-196].


Points of disagreement - Posniak warned that the opacity of LLMs makes them unsuitable for treaty-shaping work because “accountability… is not always viable” [82-85], whereas Scott acknowledged the black-box nature but argued that “the lack of transparency is not insurmountable” and that work-arounds are possible [254-259]. A second tension concerned the level of trust to place in AI outputs: McQuade suggested AI could serve as a trusted augment-tool that aggregates information and offers new levers for negotiators [210-212]; Balakrishnan counter-pointed that AI should remain a “data point” requiring human validation and explanation [297-304]. Finally, Ramos advocated for AI that “opens space for human understanding” rather than being used to “beat” counterparts, while Scott cautioned that without careful framing AI could create a zero-sum dynamic that diminishes human agency [354-367][254-259].


Next steps - MOVE 37 will develop a suite of AI-augmented tools covering the four phases identified by Posniak (research, analysis, strategy, execution) [102-115] and will create “evaluation methodologies” to assess their impact [125-129]. The project will continue “one-on-one interviews” with current and former diplomats to capture “process-level insights” and to inform the design of position-tracking repositories and strategic-option generators [267-278]. It will also pursue multilingual, culturally inclusive datasets, drawing on examples such as the Swiss quasi-governmental LLM trained on over 100 languages for diplomatic use [424-425]. Capacity-building initiatives will be launched to raise AI literacy across intelligence, the State Department and other federal agencies, and pilot programmes will be equipped with systematic evaluation frameworks to close the “pilotitis” gap [242-250][194-200].


Key take-aways


– AI should augment, not replace, human negotiators.


– Transparency, modularity and human-in-the-loop design are non-negotiable.


– Multilingual, culturally representative data are essential to avoid bias.


– Capacity-building and rigorous evaluation are required to move beyond pilots.


– Governance frameworks must balance strategic advantage with ethical safeguards.


Overall, the discussion highlighted strong consensus on AI as an augment-tool, the necessity of transparent, modular, human-centered systems, and the importance of multilingual inclusivity and capacity-building. At the same time, disagreements over model opacity, the appropriate level of trust, and whether AI should be framed as a collaborative aid or a competitive lever indicate that MOVE 37 will need flexible governance structures that balance strategic advantage with ethical safeguards. The panel’s thought-provoking remarks – from Posniak’s challenge to “just ask an LLM” [82-85] to Scott’s “below/above the algorithm” heuristic [256-259] – set the tone for a pragmatic, yet cautious, roadmap toward AI-augmented diplomacy.


Session transcriptComplete transcript of the session
J. Michael McQuade

I’ve been a major figure in international policy for the United States and in education at the Belfer Center, where our objective is to teach, train, and do research on subjects related to the applications of science and technology for international affairs. We have scholars, practitioners, students, all working to address the gaps and the opportunities for technology, science, and geopolitics. I’m very delighted to have everybody here today. The Emerging Tech Program, which I have the honor of running, was launched about a year ago, specifically to look at where emerging technologies are creating new policy frontiers, new opportunities to use technology to engage in policy, and the implications that technologies are creating for governance, geopolitics, global stability, and global conflict.

And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in a program that’s relating technologies to modern issues around geopolitics, that artificial intelligence is one of the major aspects of our work. We have a terrific panel here today. It’s my pleasure to introduce them by name, and we’ll talk a little bit more about each one in just a moment. The missing chair, which we expect shortly, is Gabriela Ramos, who’s the former Assistant Director General for Social and Human Sciences at UNESCO. Nandita Balakrishnan is the Director of Intelligence at the Special Competitive Studies Project in Washington. And ⁠Robyn Scott is the CEO and co -founder of Apolitical.

And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a full -time fellow with the program, research fellow with the program. And Slavina Ancheva is a current. student within our program, an MPP student at the Belfer Center. I also want to acknowledge a colleague of ours. who is not here, was not able to get to India in time for the conference, Carme Artigas, who is the former co -chair of the UN AI Advisory Panel, who has been an integral part of starting this work and the ongoing progress we are making. Carme is also the Spain -India Ambassador for AI, High Commissioner in Spain -India for AI, and is here with us in spirit and maybe even on the live stream that we’re doing here today.

So a big shout -out to Karame for all of her help. So why are we here? We are embarking on a major new project, specifically looking at the use of artificial intelligence in diplomacy and negotiation. We are here at a conference about the use of artificial intelligence and the implications of artificial intelligence in so many aspects of society. And our work, is looking at how one will engage non -human intelligences in the process. of diplomacy and negotiation. So we’re here because this is a broad -based project. It is not something that is solely the purview of a small team in Cambridge where we are located, but something that we are looking for collaborators, partners, and input from the community here in the community around the world as we build what will surely be a place where AI is used and surely will be a place where we need to be cognizant and careful about how AI will be used in that exercise.

So the work we do plays on an increasingly bigger role in shaping the relationships between states and within states. We want to have this conversation about the role for AI specifically because of the global nature and the integrated way that AI will play, and more specifically, how will we use artificial intelligence tools to augment humans in what is at its core a fundamentally human process of negotiation. Thank you. diplomacy. Diplomatic negotiations are very high stakes. They are very different, as you will hear from our team, very different than classic one -on -one win -lose negotiations or win -win negotiations. They are very much more complex than that, and that means they require both a unique human touch and a unique application of how artificial intelligence might be used in that process.

But it’s also an area where an enormous amount of potential exists to tackle resource constraints, to find better outcomes, and to use artificial intelligence to enable a more stable and prosperous world, and one for which the follow -up to negotiations can be a subject for the tools and applications of modern technology. How we go about that is crucial, and how we talk about it from the beginning is crucial. There are already a number of tools emerging in this space, but it’s our belief that a more rigorous approach is needed, and you’ll hear some of that in just a moment. So what are we going to do? So what are we going to do? Our team is going to do a brief overview of the way we are thinking about this problem.

And after that, we’re going to have this amazing panel that we will engage in a conversation about views from others who are involved in negotiation and or diplomacy writ large and their views on how technology can be used, be used well or be used not so well, and what the implications of that will be. So with that, let me turn it over to Charlie and Slavina to talk about the project itself. I think Slavina is up first.

Slavina Ancheva

Thank you, Michael. And thank you all for being here this morning. A big welcome. Over the next 10 minutes or so, Charlie and myself would like to present you with a little bit of a framing for the expert discussion that we’ll be getting into right after. We’ll broadly focus on three areas. How do negotiation processes currently look and the complexity that comes with them? The potential for AI to augment many of these challenges and processes. Thinking beyond just LLM. and the need for responsible deployment of these tools. Before we do that, I’d like you to close your eyes and imagine. You walk into a negotiation, you look down at the agenda, and there’s 10 items on it.

But as any good negotiator, you know that it’s not just about those 10 items. It’s a lot of other factors that are happening both inside and outside that room that are affecting how that negotiation process is happening. So for one example, you’re sitting across seven counterparts from seven political groups, seven different countries, and behind you, you have your own team, but also 27 other countries that you’re representing, that you’ve promised a certain outcome or a certain deal. Of course, this is not my story. It’s the story of Carme Artigas that Michael mentioned, who was one of the chief negotiators of the EU AI Act, and later the UN AI Advisory Body and many other negotiations. But it’s not just her story.

It’s the story of many of you. It’s the negotiations that you’ve engaged in at the UN, at COP, bilateral. It’s the negotiations that you’ve engaged in at the UN, interagency negotiations within your own organizations. So you know very well that negotiations are complex and they evolve over time. So what might look like just two states negotiating with each other bilaterally is actually a whole set of issues that are on the table. It could be natural resources, it could be AI, it could be climate, and a whole lot of external and internal stakeholders that are also trying to influence that process. We start to dive into some of this complexity. And more than that, there’s a lot of teams that are sitting behind these principal negotiators, the different departments and agencies that are supporting them with evidence, with documents.

And we’d really like to stress that this is a fundamentally interpersonal process. We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better. And finally, rarely in this world do we have just two states negotiating nowadays. There’s often a third state at the table. In the case of the EU, maybe 27 member states and, of course, a lot of hundreds of others that could be out there. So with that being said, what are some of the impacts of this complexity? Well, for one, there’s a whole lot of information that needs to be managed. There’s a simple negotiation can generate thousands of documents, transcripts, drafts. On top of that, there’s a certain amount of finite resources that any team has as they grapple with many other challenges throughout the day.

There’s a lot of strategic elements. Sometimes in groups, you might have a group think or herding that leads you in one direction as opposed to exploring your full set of options. And finally, there’s the time pressure. So most negotiations do have some sort of time element and handover element to future teams. So with that being said, how can AI help? And I’d like to turn over to Charlie.

Charlie Posniak

Thanks, Lavinia. So AI systems can now beat some of the best human players at Go, at chess, at video games. At board games, language models, as we’ve heard, have become increasingly competent at delivering a range of sophisticated. legal, academic, technical, software contributions, the pace of change has been staggering. And so what our interdisciplinary team has been looking at is we’re trying to envision a better future for diplomacy where computational methods can transform the practice of diplomatic negotiations and statecraft that Slavina just outlined. So supporting better communications, better resolutions, and processes between states can augment their functions. So we’re trying to chart existing technical tools, develop new ones, and provide a range of policy guidelines to ensure that this happens responsibly, safely, effectively.

So the classic question that we get in response is, why can’t you just ask an LLM? Lots of people are interested in trying to see if language models can simulate diplomacy or if chatbots can guide people through a negotiation all in one step. But ultimately, language models are remarkable, and they used to be carefully scoped for three key reasons. Firstly, their fluency isn’t necessarily verifiable in this international and world politics. Secondly, the opacity where you can’t tell what’s going on inside a model is not always verifiable, is not always viable. Because high -stakes negotiations require… accountability both democratically and internally to understand why recommendations shape treaties in certain ways. And additionally, there’s a toolkit that’s 80 years old here.

We have game theory, decision analysis, machine learning, a great range of theoretical developments that exist precisely to model strategic interactions under uncertainty. And so we see LLMs as playing this role at the heart of a really broad set of learning paradigms that’s tying together. We’re both like supervised and unsupervised, self -supervised learning here. But LLMs provide a really strong way to interact with all of these different learning paradigms and technical architectures that the best advances in AI have been built from. So whether that’s the systems that play chess or Go or board games, these are all pulling together lots of different methods. And if we just rely on chatbots at the heart of things, we miss out on all of the technical developments that the last 80 years have experienced.

But there are three key challenges with trying to expand these techniques into the world of diplomacy and world policy. One is to be able to do it in a way that’s not just a way to do it in a way that’s a way to do it in a way that’s not just a way to do it in a way that’s not just a way to do it in a way that’s not just a way to do it in a way that’s not just a way Firstly, representation. As Sabina was touching on, the game that’s being played here isn’t a board game. These are things that are these interactions are fundamentally changeable over time. The institutions that constrain the actions of states can be made and unmade over the course of a negotiation.

Inference like these are environments where there’s real strategic misrepresentation, where people are lying or deceiving or trying to shape outcomes for their own advantages in ways that the current methods aren’t quite well suited to handle. And finally, there’s as we’re touching on, there’s this sense of specifying success. How can you bring together all of the different counterparts and come up with a relatively set, coherent set of preferences and priorities over the course of a really massive negotiation? So these are three challenges that we’re trying to embark on. And one of the ways that we’re approaching this is by breaking down the the tasks of diplomacy and the tasks of negotiation for AI applications. So just broadly, one of the ways we’ve looked at this, is saying that there are some foundational tasks of research analysis.

analysis, strategizing, and execution that build this evidence base with research, that analysis processes the information that you’ve managed to gather. Strategizing relies on using the analysis and the research to come up with a map from your preferences to your outcomes. And then finally, in the room, executing a negotiation, you’ve got to be able to dynamically adapt and adjust over time. And this isn’t a linear process, but a re -entrant cyclical sense of you have all of these things as they change, feeding up through this knowledge base. And so you need this really strong computational infrastructure to be able to even begin to apply some of the really exciting and fascinating AI and ML methods we’re touching on.

So with this, we see a future where research can be done with autonomous research agents, and you can have source validations and get immediately generated counterpart biographies, analysis of gaps and preferences and evidence bases, strategy sandboxes, red team training, and trying to simulate how both the public and the public can interact with each other. And so you need this really strong data set to be able to do that. And then finally, in the room, executing a negotiation, you’ve got to be able to be able to identify the best way to do that. And so you need this really strong data set to be able to identify the best way to do that. And so you need this really strong data set to be able to identify the best way to do that.

And so and then in real time having transcription and translation services that AI and ML methods are doing a really phenomenal job at. All of these things we think will play a role in this multi -model, multi -method world of computational support for diplomacy negotiations. And so this is just a sense of how we’ve tried to break down this problem and get a grasp on the existing and future technical developments. Finally, we want to end on these three commitments that are central to a lot of the stuff that we’re talking about. One, human authority has to remain central. We can’t have any objection of responsibility over decisions of war and peace. We have to make sure that the tools themselves are modular and transparent so that you can see what’s happening at each stage of the process and which parts of which computational systems are supporting analysis.

And then finally, making sure that augmentation is appropriately scoped for the team, the institution, and the setting that it’s in. So with that, I’d like to hand over to Michael in the panel, our director of the program. And what I hope will be a wonderfull discussion.

J. Michael McQuade

Great. Thanks, Charlie. Thanks, Slavina. So just as everybody’s sort of getting settled in, so we have a plan for a project. We have a vision of how one has a set of signposts and goalposts in what is essentially the ability to augment with intelligence, human intelligence and participation. So there are lots of technical elements of that. We’ll be developing tools. We’re looking at evaluation methodologies, et cetera, et cetera, et cetera. It’s the whole technical side. But one of the benefits of the approach we’re taking is that we have access to a large body of people for whom the day -to -day practice of negotiation diplomacy, not necessarily constrained by the definition of diplomacy, meaning state -to -state to get to an answer, but organization -to -organization, people -to -people, negotiation -to -negotiation.

And I am delighted then to have three people here who can talk a little bit about their views on how artificial intelligence will be used in the process of their work. That allows us to then learn from that experience and how we map that into the Move 37 project. So Gabriela, let me start with you. Welcome. Thank you for negotiating all the traffic to get here. So you’ve been at the center of international policy design and negotiations on issues such as climate change, international taxation, gender equality, artificial intelligence, a whole list of things in a brilliant career. You’ve done this through key roles at UNESCO, but also at the G20 and G7 and at the OECD.

We’re delighted to have you here. And let me ask you just to sort of start the discussion, if you would, to just talk a little bit about what it’s like to sit in the driver’s seat as a mediator trying to bridge sides, and how you would think about AI capability augmenting you in that process.

Gabriela Ramos

Well, thank you. Thank you so much for inviting me. Is it working? For inviting me to this early morning. And I find this topic fascinating. because when you are a diplomat and when you have negotiated many standards or agreements you don’t think about this taxonomy you never think about the taxonomy you just think that you need to get it someday and that you need to find consensus and that you need to find where the problems will be and therefore it’s very interesting that you asked me something to structure better how we do things and I’m going to refer to the negotiation of the recommendation on the ethics of artificial intelligence because that was a very difficult one, 193 countries negotiating during COVID and actually it was very helpful to have a zoom where I could see where all the countries were positioning themselves which actually helped a lot but the interesting thing is that it was about artificial intelligence we were negotiating and we had to map out where countries were and it was very interesting to see that some of the usual suspects that are always blocking the effectiveness of international instruments were aligning with countries that are very supportive of those but that didn’t want to see UNESCO playing a role in this field so I have Russia and I have UK in the same position that helped me because I called the UK and say are you happy to be in the same position and then they just hold one second but the interesting thing is it’s a very heavy document it’s very very because there are so many cultures we have to almost define the step by step and the interesting thing is that when thinking how can AI help us organize better at the moment it did not provide with so many inputs it was 2021 but UNESCO has this idea of being super inclusive we developed the recommendation with all the regions in the world represented and all the disciplines but then we put it out to the world and we receive 55 thousand comments therefore we use AI to integrate them.

That was, no? But then when you think about how do you map the positioning of countries, I think that would have been super useful to have more AI. I used to have full teams providing me with briefings for the people we were going to talk because you need to be conducting a lot of, one thing is the negotiation in the room, another thing is all the legwork that you need to be doing, talking to the different actors, knowing where they stand. And that would have been amazing, just to have a repository of what is the traditional position of certain countries or certain negotiations, which had to do with the substance, but probably has to do also with the positioning of that country in the international context and how much they abide by the rules and how much they support these things.

And then what I find fascinating, but this is always, as my colleagues here said, how you keep the woman, the woman in the loop, I love it. Yes, woman in the loop, not human in the loop, human in the loop. it was a lapsus it’s not lost on me my panelists here it was a lapsus but I like my lapsus the whole point is when you are in front of a person and you’re trying to convince that person that he’s alone nobody’s supporting his position and therefore he should not continue blocking the negotiation how would it be that you can have more information about that person what moves them how can you offer something that will be important for them because this is the kind of things that we do negotiating what would you want to have out of it I know you have your bosses in your shoulder and you need to bring them something to the table but tell them you’re alone that you’re blocking it and imagine you can have the information about that person but that’s also risky because it deals with privacy and all of those things but I feel it would be fantastic because this is strategic thinking and using the right words to get the countries to agree that will bring you on some places and that I think is a very important thing that’s a capacity that can be augmented by AI.

Thank you very much.

J. Michael McQuade

Yeah. Is it on? Yeah. Thank you very much for, actually it’s a terrific transition because Charlie was talking about the complexity of these negotiations, about how they’re not dynamic. I can think of nothing more dynamic than a UNESCO negotiation. Just trying to understand where people’s positions are by itself is a complexity. Trying to integrate 20 or 30 of those positions or 190 of those positions and then trying to find what are the right levers that I might be able to pull. We do this now all the time with people, with you. And the question is how can modern tools help in that process without removing or absolving responsibility for people. So thank you. So Nandita, you have had an amazing career in academia, the public, the private sector.

You’ve been at Stanford. You’ve worked in intelligence and advisory. And you’ve seen all the different things sides of these negotiations, both from the government side, the private sector side, inside and outside. You are currently at the Director of Intelligence at the Special Competitive Studies Project. For those of you who don’t know, SCSP is a major effort funded by Eric Schmidt, sponsored by Eric Schmidt, after the conclusion of the National Security Commission on Artificial Intelligence in the U .S. in the way that technology will be used in competition for economics, national security, et cetera. So it’s a big, broad role. Every day you are negotiating. Every day in your life you have negotiating. So can I just ask you to talk a little bit about your view, both from an SCSP point of view, but also from your career about how you would see this evolving?

Nandita Balakrishnan

Absolutely. And thank you so much for having me. And good morning, everyone. So my career, as you mentioned, has sort of spanned three distinct sectors. And they kind of came at different times, academia, public sector, and private sector. Now, there was a time, I would say, that all… three of these groups would have, the particular ways they would have leveraged technology would have obviously its variations, but the access and adoption of it were much more similar. This is just fundamentally not true of AI. The public sector has been more in the passenger seat, if not the backseat, especially over the last decade. And so what was really interesting to me is I started in academia, then went to the public sector, came out of the private sector, and I saw that dip in my access to AI.

Now, I have been in intelligence, and one important thing about intelligence and maybe a misconception of it is that it is primarily used for military, feeding information for military applications. It’s not true. We are just as vital to diplomatic efforts because every opportunity you’re looking for that something bad could happen, you’re just as much looking at what are the opportunities for something positive to happen. How can you open the negotiating space? So we’re looking at everything from both sides. So I wanted to give that perspective. As an analyst, I can say personally, it was very valuable. valuable to have the rigorous training I had to do things very, very manually. So learning how to write an assessment without the access to AI.

But now that I’m on the back end of it, I can tell you every day I ask myself, like, if I had access to these tools as an analyst, how could I have worked much faster and much smarter? Because at the end of the day, and something that Gabriella was mentioning, there’s a lot of data out there, but a human analyst is never going to be able to manually process most of that by themselves. The story I always like to tell is the very first time I wrote this intelligence piece, I was so proud of it. I thought the argumentation was great. The data I had used was great. I showed it to a mentor, and they said, this is awesome, but you didn’t consider this one piece of data from 10 years ago that completely negates your argument.

And here’s the thing. It’s not that I didn’t do good work. It’s just that there was no way I was going to know that that piece of information exists. Now imagine a tool that can help you not only identify that that data exists, but learn how to synthesize it. Okay. Thank you. Obviously, as Charlie and Slavina mentioned, human in the loop is always going to be important because you want these assessments to have a human level at the end of it. But there is a way to move better and smarter. And this is something that SESP is really advocating for in sort of three distinct ways. So first, at a very meta level, we make the argument that AI has fundamentally changed the threat landscape and the scope for global competition.

It is now kind of the foundational way we need to think about geopolitics, especially as this technology is rapidly evolving. So you really cannot divorce AI and AI adoption when you’re trying to understand geopolitics and foreign policy. Number two, in order to have an ability to make assessments about this emerging technology, to understand geopolitics, you have to have a public sector that is actually leveraging these tools to the best of their ability. Now, there are a lot of ways that AI is being adopted at the public sector. sector, you know, you’re obviously thinking about sort of the, again, military application of drones, but you need to have your day -to -day workflows integrating this technology.

And this is something that we are really focusing on, especially about how to build up AI literacy within the public sector, not just at the military level, but within the intelligence community, within the State Department, and even within, like, commerce, OPM, all the, like, any federal sector employee at some point needs to be moving smarter and faster with AI. And third, we’re looking at the, like, specific use cases. So one of the projects that we were working on last year is looking at how AI can be used for predicting geopolitical events, both for military applications, but also for State Department applications. So, and the reason we do that is because in order to convince the public sector that they should be using AI, you almost need to show them how it could look like 10 years from now as we’re moving to that future.

So by kind of demystifying its use and showing them targeted ways that you can use it, it actually solves your meta -problem of understanding why AI is so important to geopolitics. .

J. Michael McQuade

So I’m hearing a couple of things. I’m hearing this general statement that much of the world is going to be about AI and much of the world is going to be AI creating that world. And that’s the metaphor that comes directly to the project we’re looking at, which is we’re negotiating with AI tools. We have to have a baseline of capability. And yet the landscape in which negotiation and diplomacy are happening is being fundamentally changed by AI itself. And so that whole issue around preparedness and around setting the ground rules. I also heard not just the thought process around artificial intelligence as a trusted agent to accumulate information. Both of you mentioned that. But also as an agent to help understand new pathways for success, new pathways for leverage, whether those are national security or whether those are economic vitality.

The scope of the negotiations doesn’t really change that. So. In the area of. sort of preparation, let me come to you, Robin. Robin is the co -founder and CEO of Apolitical. Apolitical is a global platform for policymakers that specializes in government innovation. She’ll talk about that in just a second. In particular, you have courses in helping governments prepare their workforces for the modern world in which they live. Your AI courses have reached hundreds of thousands of people around the world. And much of what you are trying to do is to prepare the world for the kind of things that we are talking about in Move 37, obviously much broader than just that topic. So let me ask you to talk a little bit about that, if you wouldn’t mind.

What sort of lessons you’ve learned from the field and how you think about policymakers’ willingness or how you change policymakers’ willingness to embark on journeys with new tools and new capabilities.

⁠Robyn Scott

Stanford HAI is one of our collaborators. So we are more context experts than content experts, and we bring the content experts into the middle. So where are we at? Let me give you some data. And this is from a 5 ,000 -person survey that we’ve recently run. Overall, public servants are incredibly optimistic about AI. North of 90 % think there is huge possibility in the public sector. And there are lots of paradoxes here. They’re also wary of it, right? There is a huge value creation opportunity. One figure from BCG estimates that there’s 1 .75 trillion of public sector value to be unlocked if we harness AI in the right way, because AI loves bureaucracy, all these repeatable processes. And about a third of most public officials daily watch And I’m going to do something about it.

And I’m going to do something about it. is research and writing related. AI is great at that. So the prize is very, very big and that’s just the painkiller prize. When you get to the vitamin prize, when you get to what AI could do in terms of predictive policy making and responsive policy and adaptive policy, et cetera, then you get into a space that’s only really bounded by the imagination. So there’s lots of AI talk. There’s less AI action. Increasingly, we’re in a pilotitis zone where almost everyone’s got pilots. 70 % of leaders say they’ve either got AI pilots or plan to launch them this year, but only 45 % of them say they have any plan to evaluate their pilots.

So that’s a pretty big gap to close and we see gaps like this all the time. One of the biggest gaps is leaders not using the technology themselves, which is a real problem because you can’t understand this technology in the abstract. You cannot look over your grandson’s shot. older and see them using it. You’ve got to use it. You’ve got to feel the speed of change. Of the public servants who are implementing AI in the public sector globally, these are people who self -identify of having AI in their jobs. Only 26 % of them say they understand their own country’s ethical frameworks. So approximately three -quarters of all the people rolling out this technology are freestyling. That’s terrifying.

So that’s a skills and knowledge gap not even closed within an institution. It’s not even getting to how do we actually understand the basics of this technology. Just to close, and there’s a whole lot more fascinating data, but one of the things that is increasingly worrying me talking to leaders around the world working on this is that we are now getting quite drunk on the idea of AI agency. Thank you. but we’re not talking about human agency in the process and maintaining it. So I think we risk getting into a zero sum dynamic where, and I think this is relevant to diplomacy, where the agency drains away to AI and that all comes at a cost to humans.

So we need to be building up humans at the same time. And the framing and heuristic I found most helpful for this overall is this idea of this recently merged of being below or above the algorithm. If you’re below the algorithm, you might be an Uber driver being dispatched, an Amazon packing worker being allocated to put stuff into boxes. If you’re above the algorithm, you are using tools to further your goals. We need, when we think about closing that capability gap, and I think in diplomacy, to keep moving people up above the algorithm.

J. Michael McQuade

Great, fantastic comments from all of you, thank you. I’m gonna ask Charlie and Sylvina just a couple of quick comments to make comments on the project. All right. Because then I’m going to come back to the three of you and I’ll prompt you with the question now, which is what would you want to be comfortable with knowing about the tools and the capabilities that you will be asked to use or be offered to use? So, Slavina, can you just follow on Robin’s comments? One of the benefits we have of doing this program at the Belfer Center and the Kennedy School is a large set of people who have done this for a living. This is what diplomats and negotiators have done.

Can you talk a little bit about how we engage that group and what we’re trying to get from that?

Slavina Ancheva

Definitely. So I think, as Michael put it, obviously at the Belfer Center we have quite a variety of current, former diplomats, practitioners, not just from the U .S., but from all over the world. And a large part of the work that we’ve been doing is sitting down for one -on -one interviews with all of them and really getting a sense of how they think. Thank you. just about the content of the major negotiations that they’ve been leading, but more about the process. So very similar to the panel discussion we’re having here today, what are some of the uses that you see one day you could be using AI for? So a lot of what we’ve heard so far, I mean, the position tracking that I think Gabriella referenced.

We’ve heard a lot about historical precedent, the generating of options and strategic options, and really uncovering the deepest interests. And I think where this ties really well to what Robin was saying is a lot of them are also expressing their hesitancy. So they’re being very forthcoming in that, and I think that allows us to take a really sober look at what are the risks of integrating these tools. One of the main questions we get is if you’re using these tools, we ask them, what would you like to know? So exactly what Michael’s saying now. So a lot of these interviews have really been integrated across the different work streams of our project, and we really put diplomats and practitioners at the heart of the rest of the work that we’re doing.

All right. Thank you.

J. Michael McQuade

And Charlie, you talked a little bit before about, you know, there’s an obvious role that LLS has. I think that’s a really good point. And I think that’s a really good point. in helping people accumulate and synthesize a very large amount of information. But there are many more aspects of a negotiation. Can you talk just a little bit about some of the other ways the tools are going to be used in the project that we’re doing?

Charlie Posniak

Yeah, absolutely. Thanks so much, and thanks so much for the comments as well so far. The panelists have touched on a couple of really interesting applications, whether it’s, especially as Robin was talking about, with predictive or adaptive policies, if you’re looking at, and Anita as well, with the predictive geopolitical events. We have a really fascinating array of algorithms that are incredibly competent at these sorts of predictive tasks. And what we now have with the current computational ability and also the ability of language models is we can process vast amounts of unstructured data in ways that makes these algorithms even more accessible to a wider range of people. So I think that that’s one area that I’m particularly excited about.

There’s also a bunch of stuff, as Gabriella was touching on, is how do you take these vast unstructured transcripts and come up with natural… natural language processing additions on top, and can we represent positions that they track? and change over time. I think that is one of the big parts of cognitive load that I think diplomats have spoken about a lot.

J. Michael McQuade

Great, thank you. We’re going to see if questions from the audience are in just a moment, but Nandita, I’m going to start with you. I’m going to, forgive me for doing this, I’m going to characterize your career as an analyst. Every job that I read and see about has been an analyst of some kind. You’re constantly in a place where people are suggesting new tools and new capabilities. So think about AI in the world you’ve inhabited. What’s going to make you most comfortable when somebody shows up and says, here’s the thing, it’s going to make your life better?

Nandita Balakrishnan

That they can explain what the outputs are. So one of the things, like when we were looking at, for example, could we be using AI for predicting geopolitical events, a lot of people, both in industry, in academia, and in the public sector, that are working on these types of projects, they all say the same thing, which is this should be seen as a data point or a shaper of the way you should view the world, not as finished intelligence. Finished intelligence, should always… ultimately be done by a human who is accountable to their policymakers. Now, if you are working in policy, you understand actually how this works. You’ll have your head of state or your head of government come to you and ask you to explain exactly how you got to your assessment.

Right now, we have a human who ultimately has to do that. But what happens when we’re starting to rely more and more on AI tools is you never want to lose that ability to explain the outputs and particularly demonstrate that you’ve looked at all the counterpoints that you could possibly do. Now, this is where I think AI is super helpful because oftentimes when I was trying to figure out, especially in academia, what are all the things I could have done wrong? How could I have measured this differently? You’re always prepared to thinking about all the decisions that you made and how to justify and validate them. But as the scenarios get broader, as they get more complicated, your ability to figure out what the counterarguments are are going to just dwindle over time.

Oftentimes, the argument we make is that humans are biased at the end of the day. We make our decisions differently. We make our decisions differently. We make our decisions differently. We make our decisions differently. We make our decisions differently. We make our decisions based on how we got to where we are today. the experience that you have. So AI can be really, really helpful in helping you sort out the counterarguments, but you still need to understand how those counterarguments work and why ultimately you’ve come to the assessment that you have. So where I would feel super comfortable is this is how I relied on AI. This is how it came to the output that it did.

This is fundamentally why I made the assessment that I did.

J. Michael McQuade

Great. Thank you. And Robyn, I think this is the world you live in every day of helping governments and government officials and civil service workers. So project that onto an AI for diplomacy landscape. What do you think is going to be important to get people to say, I’m going to trust this, I’m going to work it, but I’m at least going to try?

⁠Robyn Scott

Well, at the risk of stating the obvious, I think we should just acknowledge that the people developing these models don’t even have full legibility over how they’re working. So that’s where we’re starting from. So that’s the kind of ceiling. On where we can get to. You can break down the thinking process as it were, but you still have that black box. I don’t think it’s insurmountable. I think some of the things I’m worried about relate to the more psychological aspect of this, and in particular, sleeping at the wheel, this phenomenon where we have this strange relationship with AI where we get false negatives too quickly. So it does a bunch of clever things, except it didn’t do this one thing, and therefore we can’t use it for anything.

And if you check back in in like a month’s time, often it can do the thing. So you have that, the false negative, and then the phenomenon of sleeping at the wheel is where it starts to get very, very good, creeping upwards of like 85%, 90 % accuracy, and then you assume it’s 100 % accurate. And it’s really quite hard to edit your assumptions and say, no, it’s not. And you’ve probably all found this. Because if they’re power users of AI, and I’m one of them, some of them are not. Sometimes it comes across as so smart and so brilliant and comes up with a whole lot of counter -arguments. I use it for sort of kicking the tires from different perspectives all the time on stuff I’m doing.

That it’s almost overwhelmingly smart, and you’re like, it must have covered everything. That’s a default. So I think giving us the human tools and the psychological sort of counter -arguments and weaponry to deal with this is really, really important. I already have a heuristic that whenever I open my phone and I’m dealing with anything with an algorithm, I am in opposition to that algorithm because its interests don’t generally coincide with mine. So I try and get all algorithmic stuff off my phone as a starting point. But the dynamic with AI is a bit different, but I still think you have to have that sort of battle mentality with the technology. So that would be my…

There are many other things to consider, but that’s top of mind.

J. Michael McQuade

I think that’s terrific. And I think this idea of calibrating on the… Like what I want from completeness when I ask for analysis may be very different than what I want from can you just give me some different ideas that I haven’t thought about before. And different stages in a negotiation are going to require different levels of calibration. So, Gabriela, what do you think?

Gabriela Ramos

Well, it’s very difficult to follow these two girls. But the fact is that when you know a little bit more about how these things work, I’m not a technologist. But I have been looking at all of what can go wrong. Misrepresentation, over -representation of certain cultures, certain languages, assumptions. Therefore, if I am negotiating and you’re going to offer me a tool to improve my negotiating skills, I need to be sure that the assumptions that you use to build that tool are not just to beat the person in front of me. or not just to maximize efficiency or not just to do the kind of things that we are teaching the AI to do. And therefore, it’s much more complex.

Because what you want to do is to open a space of human understanding. How do you do that? And therefore, I will be questioning, as Robin said, always questioning, but it’s not what we do. And the other point is that the AI, what is amazing, is that it’s just reproducing cognitive abilities that humans do. So when you go into the using whatever chat box you use to get information, you take for granted what it comes out. What you would never do if you hire somebody in the first week. Even if you have done all the checkpoints for that person to have the capacities that you are looking in the market. So I feel that there is this question of, first, really bringing to the table the AI.

tools that are going to be reliable and trustworthy, and I know that these words are almost a cliche, but the reality is that sometimes they’re not. And the other point is that you can become very lazy. And how do you avoid just to grab the thing and say, that’s perfect? How do you keep that space for ourselves to take the decisions and be not only in the driver’s seat, but actually to think of AI as a supporter cast. And if we get the Oscar, it’s us and not the AI.

J. Michael McQuade

That’s fantastic. I have this mental picture in my mind for those of you who’ve done negotiations in any kind. The first thing you do is you grab a bunch of your team in a room and you say, let’s talk about strategy, and what are we going to do? And Bob in the corner says, here’s an option, and you realize Bob had a bad night last night, so maybe you discount what Bob says. So what happens when the AI says, here’s a thing? I don’t just trust it, it’s a priority. I have to apply a human judgment to what I’m hearing. So terrific points. Okay, we have time for a question or two, and I see one.

Just say who you are, where you’re from, and a quick question.

Audience

Thanks so much, Michael.

J. Michael McQuade

We have a microphone. Thank you.

Audience

Thanks so much. I’m Sam Dawes. I’m a senior advisor to the Oxford University AI Governance Initiative and director of multilateral AI. But my background is in diplomacy, working for Kofi Annan when he was Secretary General, and then for the Foreign Office and Cabinet Office. I wish we had had AI tools back then. So I was really inspired. It’s such a timely, rich panel, so thank you all for that. Something that Gabriella said around culture I think is so important, and I’m thinking about the positives and the risks with applying AI in this space. How can we ensure that the diverse cultural inputs of the world’s most diverse countries, of different societies, are… embedded in the data sets and the models which inform negotiations.

So is that something that UNESCO is working on in the long term and connects to the tools we use? And the second question is around the flip side. If AI is to be a useful neutral mediator in disputes or an assistant to a mediator, a human mediator, then what do we do about data poisoning and prompt injection and those kinds of risks? Thank you.

Gabriela Ramos

Very fast, not on the question, the question of culture. Culture is expressed by language. And therefore the more we can try to represent those languages in the models we use, I think the best we will be prepared to understand it. And I’m fascinated by that. I’m not a linguist, but if I… I would choose another life, I would do that. Because when you hear, for example, there was this Namibian representative during the negotiations of the ethics of AI, and she was saying, I find your draft very individualistic. It’s always about the human. It’s always about the outcomes for people, improving their welfare. And at the end, what I’m thinking about is the Ubuntu philosophy, which is I am because you are, and we are because it’s nature, and we are interlinked.

And therefore, how do you capture this when the models that we are developing are maximizing individual welfare? And so the only answer I have is try to be representative, and I think this is nothing new. We have seen how much these tools can discriminate if you are just built in one language or with the representation of certain characteristics of people. or countries. but really to be sure that you are capturing the richness that comes through language and opening up the sources and that’s the other point the sources this is one thing that I would always ask the answer you’re giving me is based on what sources and that would might help but these are checkpoints that we always need to be testing on the ground

J. Michael McQuade

I think you also if I just raise one other thing I think you raise one other really important point which is you know there’s a whole spectrum of things here there is negotiation because we have a set of interested parties to get to a common good understanding we also have very adversarial negotiations so adversarial negotiations open up this whole possibility of data poisoning of training set differentiation etc. so it’s a very complex world I really appreciate you bringing it up we have time for one more question I think let’s go right here can somebody tell me are we counting down to zero or are we counting down to zero or are we counting down to five Are we okay to keep going to zero here?

Okay, good. We’re going to go to zero no matter what.

Audience

Good morning. Good morning. Namaste. My name is Devika Rao. I meet 300 to 600 people per day, and I work around different languages. So basically I’m an Indian classical dance teacher. Okay. So I have data. I have a human connection. So, and what we want to do, how this cultural education can be supported by AI. So what is the step I can take further? Presently I’m actually working on a framework, cultural framework, which is India and UK POCC 2025, 2030. So I’m also interested in NEP and national health policy because people connected to their health and education. And education, which is the center point. So where I can go. and what kind of co -creation, co -collaboration can happen in this?

J. Michael McQuade

Robyn, is this something you want to jump in on? Maybe give her your email address.

⁠Robyn Scott

I wish I had an immediate response to that. I don’t think there is any default place to go, but I do think this is where the conversation is evolving, and there’s more and more recognition of the cultural oversight and importance. So I would just encourage you to please keep making those points. And I will just make one comment on the first question. The Swiss have built sort of a quasi -Swiss government, quasi -multilateral initiative to build an LLM that is trained from the outset on more than 100 languages, and it is actually run by a friend of mine who’s a former Swiss diplomat, so she’s coming at it very much with a diplomatic context. I’m very happy to make that connection.

Gabriela Ramos

Education. Super. Super complex. Don’t look at the technology. because we always focus on the technology the countries that have introduced so much technology in their educational systems didn’t get better student outcomes because of content we go to the internet and we go to the systems and we try to bring tools to help kids and we never see if they are contextually relevant culturally linked and therefore if you don’t produce the content the tools will not make it.

Nandita Balakrishnan

I’ll just add one last thing I think the way to think about AI is also is it actually solving a problem or are you just trying to introduce it to create a new problem I think this is where you have to think about the point of AI augmentation I think there are a lot of ways we can think about how AI can augment the problem sets that we have but sometimes you don’t actually have the problem that AI is going to solve and you don’t need to force AI to fix it

J. Michael McQuade

Thank you very much Okay we’re going to negotiate If you have a really quick question you can ask it No behind you Thank you it’s got to be quick though

Audience

My name is Arman I’m working for JPL South Asia just a quick question on how do you think this would impact balance of power like given that every country has different access to the kind of data sets that they have and as we saw there can be three states also in the play how it would look like state A knows everything about the rest of the players and the others don’t.

J. Michael McQuade

So we think a lot about I’ll answer this if you guys are okay we think a lot about this in the project which is what’s the evolution of a set of AI tools it’s like everything else that we are here at this conference which is where will tools provide competitive leverage where are the kind of tools in the world we live in ones that should be dispersed actively and offensively not defensively in a world where some of the negotiation is about getting everybody to a positive and some of the negotiation is adversarial so I think it is a huge element of how it will change power structures not just because it’s a thing we think about from a negotiation diplomacy but because the general AI tool.

Okay with that we are just about out of time. I want to thank this amazing panel. Gabriela, Nandita and Robin. I want to thank my colleagues Slavina and Charlie for and I want to thank all of you. We are at the beginning of a long process. When you work at a place like I do at the Belfer you think about projects that have beginnings, middles and ends and you think about projects that can grow into something really really important. So any of you who have interest in what we’re doing please let us know if you feel like you have questions that we ought to be asking or you have answers or you have answers to questions we have asked.

We would love to hear from you as we begin to build what we think is a really important discipline. So thank you and thank you to the sponsors and hosts. I appreciate everybody joining us. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (30)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“J. Michael McQuade introduced the MOVE 37 initiative, a new project by Harvard’s Belfer Centre exploring AI’s role in diplomatic negotiations.”

The knowledge base explicitly describes MOVE 37 as a new Belfer Center project introduced by J. Michael McQuade to explore AI augmentation of diplomatic negotiations [S2].

Confirmedhigh

“Artificial intelligence is a major aspect of the Emerging Tech Programme’s work and can augment human capabilities in diplomacy and negotiation.”

S2 notes that AI is central to the programme’s aim to augment human capabilities in diplomatic negotiations, and S16 lists specific AI applications for negotiation analysis, confirming the claim [S2] and [S16].

Additional Contextmedium

“AI tools can help participants accumulate and synthesize large volumes of information during negotiations.”

S1 highlights the role of large language models in helping people accumulate and synthesize very large amounts of information, providing additional nuance to the claim [S1]; S16 further describes AI-driven data analysis for negotiation scenarios [S16].

Confirmedhigh

“Diplomacy remains a fundamentally human activity; AI should augment but not replace human agency in high‑stakes negotiations.”

Both S17 and S24 stress that AI is a tool that must remain under human control and that the art of negotiation and trust-building are profoundly human, supporting the report’s framing [S17] and [S24].

Additional Contextmedium

“Negotiations often involve many counterparts, multiple countries, and generate massive amounts of documents, creating information overload and strategic group‑think pressures.”

S106 discusses the complexity of international negotiations, including the struggle over process, multiple participants, and extensive documentation, adding context to the described information overload [S106].

External Sources (106)
S1
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a ful…
S2
How AI Is Transforming Diplomacy and Conflict Management — And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a ful…
S3
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S4
How AI Is Transforming Diplomacy and Conflict Management — -Slavina Ancheva- Research fellow and MPP student at the Belfer Center, working on the MOVE 37 initiative
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And then at the far end, two of my colleagues, researchers on our program at the Belfer Center. Charlie Posniak is a ful…
S6
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S7
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S8
How AI Is Transforming Diplomacy and Conflict Management — – Robyn Scott- J. Michael McQuade- Charlie Posniak
S9
tABle of Contents — and Costs, 24 health aff . 1103, 1103 (Sept./Oct. 2005), available at http://content. healthaffairs.org/cgi/reprint/24/5…
S10
MASTERPLAN FLAGSHIP PROGRAMMES — | S/N | Name | State Departments | |——-|———————|-…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Gabriela Ramos, Assistant Director General for Social and Human Sciences at UNESCO, has highlighted the unique mandate o…
S15
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S16
Negotiations — Negotiation is a complex and dynamic process requiring strategic thinking, psychological insight, and cultural awareness…
S17
Why will AI enhance, not replace, human diplomacy? — AI tools are already here to assist certain aspects of negotiations, from language translation to data analysis. However…
S18
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S19
Cybermediation: What role for blockchain and artificial intelligence? — After explaining in further detail some aspects of NLP, she suggested that these tools can be used to support the work o…
S20
What are diplomatic competencies for the AI era? — In emerging hybrid intelligence models, diplomats must effectively collaborate with AI systems, blending human judgment …
S21
Enhancing rather than replacing humanity with AI — Human judgment stays central, especially for important decisions. AI provides information, analysis, or options, but peo…
S22
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Moreover, while AI and new technologies have significant potential in agriculture, it is crucial to understand that they…
S23
Seeing, moving, living: AI’s promise for accessible technology — Human oversight and choice must remain central. Users should control when their devices collect data, how that data is u…
S24
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S25
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S26
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — Another concerns transparency and accountability. For negotiation support, it is not enough that a system produces a pla…
S27
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Moderate disagreement with significant implications. While speakers share common concerns about AI governance, they diff…
S28
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, so one thing that I didn’t mention that we are working on currently is also these AI regulatory sandb…
S29
Artificial intelligence and diplomacy: A new tool for diplomats? — Artificial intelligence (AI) is transitioning from science fiction into our everyday lives. Over the past few years, the…
S30
WS #97 Interoperability of AI Governance: Scope and Mechanism — Yik Chan Chin: Thank you, Sam, because we know he’s an expert in terms of the UN. So thank you very much for your comm…
S31
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S32
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S33
AI as critical infrastructure for continuity in public services — Awareness and capacity gaps exist in understanding available standards and building blocks
S34
Artificial intelligence (AI) – UN Security Council — Another significant risk is the potential for bias in AI algorithms, which can reflect existing prejudices and stereotyp…
S35
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Audience:Hamid Hawja is my name, from Morocco, director of Hebdo magazine. I have two questions. First, I’m just wonderi…
S37
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Qian Xiao:OK, well, I’m doing a lot of research on the international governance of AI. And from our perspective, we thin…
S38
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S39
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S40
Why will AI enhance, not replace, human diplomacy? — In sum, AI is a powerful tool, but it is still just a tool. It is a good master and a bad servant. As we step into AI tr…
S41
Ateliers : rapports restitution et séance de clôture — Joseph Nkalwo Ngoula Merci. C’est toujours difficile de restituer la parole d’experts de haut vol. sans courir le risque…
S42
Open Forum #37 Digital and AI Regulation in La Francophonie an Inspiration and Global Good Practice — Boukar Michel: Thank you, Mr. Henri. Mr. Ambassador in charge of digital, thank you for giving me this opportunity to ta…
S43
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Such inclusivity has far-reaching implications for achieving SDG 4: Quality Education and SDG 10: Reduced Inequalities, …
S44
Artificial intelligence — Content policy Cultural diversity Inclusive finance Multilingualism
S45
Embracing AI in diplomacy: How can Europe prepare for pivotal transformation in global affairs? — Firstly, AI is reshaping the geopolitical environment in which diplomacy operates. It facilitates the redistribution of …
S46
Shaping an inclusive global action to anticipate quantum technologies — Cooperation amongst non-quantum states may shift power dynamics. Moreover, the strategic alignment among the Global Sou…
S47
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Digital networks and AI developments are critical assets for countries worldwide. Thus, they become central to national …
S48
How AI Is Transforming Diplomacy and Conflict Management — And Charlie, you talked a little bit before about, you know, there’s an obvious role that LLS has. I think that’s a real…
S49
Negotiations — Artificial Intelligence (AI)has various applications in diplomacy. It can be used for data analysis to predict the outco…
S50
AI Algorithms and the Future of Global Diplomacy — “But of course, we have specific applications in the foreign office like supporting negotiations.”[14]. “A lot of what d…
S51
World Economic Forum Town Hall on AI Ethics and Trust — Risk Assessment Before Trust Trust requires context and cannot be evaluated without specific use cases. Botsman argues …
S52
AI with Trust — how trust inAIcan be increased through a coordinated interplay between standards, laws and conformity assessment
S53
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Audience:Thank you. My name is Sonny. I’m from the National Physical Laboratory of the United Kingdom. There’s a few wor…
S54
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: Yes, thank you so much. My name is Alex Maltzau. And I work as a second national expert in the European AI…
S55
WS #283 AI Agents: Ensuring Responsible Deployment — As the session reached its time limit (with Prendergast noting the final 10 minutes), the discussion revealed both the p…
S56
Building the Next Wave of AI_ Responsible Frameworks & Standards — I think there is a significant role the governments, innovation hubs, academia, and startups have to play in developing …
S57
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S58
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S59
Open Forum #17 AI Regulation Insights From Parliaments — AI governance requires ongoing education for all stakeholders – politicians, policymakers, and the general public. This …
S60
Gender rights online — AI systems can learnbiasesfrom training data, leading to discriminatory outcomes online, includinggender-based dispariti…
S61
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — In conclusion, the use of AI language understanding has made significant progress in reducing inappropriate sexual conte…
S62
AI diplomacy — However, we must remain masters of our tools. The final analysis, the subtle art of negotiation, the building of trust; …
S63
Why will AI enhance, not replace, human diplomacy? — AI tools are already here to assist certain aspects of negotiations, from language translation to data analysis. However…
S64
Oman: Nexus between traditional and tech diplomacy — There is much discussion about AI and digital tools changing diplomacy. Yet, it is clear that technology will not replac…
S65
From agentic AI to agreement technologies: LLMs as a new layer in diplomatic negotiation — Another concerns transparency and accountability. For negotiation support, it is not enough that a system produces a pla…
S66
How AI Is Transforming Diplomacy and Conflict Management — “We have seen how much these tools can discriminate if you are just built in one language or with the representation of …
S67
AI Transformation in Practice_ Insights from India’s Consulting Leaders — For consulting firms, the path forward involves embracing AI as an enabler whilst focusing on uniquely human capabilitie…
S68
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S69
Harmonizing High-Tech: The role of AI standards as an implementation tool — Sezio Onoe:Yeah, actually the ITU has published over 100 standards and also the 120, around 120 now ongoing. So actually…
S70
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S71
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The tone was pragmatic and solution-oriented throughout, with speakers acknowledging both challenges and opportunities i…
S72
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Ellis emphasized that while organizations are investing in AI technology, there’s a significant skills and capability ga…
S73
AI as critical infrastructure for continuity in public services — Awareness and capacity gaps exist in understanding available standards and building blocks
S74
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Audience:Hamid Hawja is my name, from Morocco, director of Hebdo magazine. I have two questions. First, I’m just wonderi…
S75
Artificial intelligence (AI) – UN Security Council — Another significant risk is the potential for bias in AI algorithms, which can reflect existing prejudices and stereotyp…
S77
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Sofiya Zahova: Thank you, Davide. I’m honored and delighted to join you today on this important panel, but even more ple…
S78
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S79
WAIGF Opening Ceremony & Keynote — The overall tone was formal yet optimistic. Speakers expressed enthusiasm about the potential of digital technologies wh…
S80
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S81
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S82
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S83
Digital trade negotiations- understanding non-participation (National board of Trade – Sweden) — The lack of global representation in digital trade negotiations is problematic as it can lead to a fragmented approach a…
S84
Afternoon session — Israel criticizes the negotiation process, particularly the final stage, as lacking transparency and not reflecting the …
S85
WS #219 Generative AI Llms in Content Moderation Rights Risks — – **Technical Limitations and Trade-offs**: The discussion covered inherent technical challenges including the precision…
S86
Main Session on Cybersecurity, Trust & Safety Online | IGF 2023 — Christopher Painter:Thank you, Olga, and it’s great to be here, and I should say that you’re wondering what I’m wearing …
S87
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S88
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — The recognition of Global South leadership and the importance of environmental sustainability represents a maturing of d…
S89
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Israel: Good morning and thank you, Chair. We will present in brief, for the sake of time, some main points of our nat…
S90
Quantum Technologies: Navigating the Path from Promise to Practice — The tone was cautiously optimistic and pragmatic throughout. Panelists demonstrated excitement about quantum’s potential…
S91
High-Level session: Building and Financing Resilient and Sustainable Global Supply chains and the Role of the Private Sector — Conclusively, the discourse maintained an optimistic outlook, acknowledging the transition’s dual aspects of risk and pr…
S92
NATIONAL CYBER SECURITY FRAMEWORK MANUAL — – Hallingstad, Geir, and Luc Dandurand. Cyber Defence Capability Framework – Revision 2. Reference Document RD…
S93
Global Governance of Digital Technologies: A Contemporary Diplomacy Challenge — Technology and international affairs are interrelated. The relationship between technological and international relation…
S94
Creating Eco-friendly Policy System for Emerging Technology — Furthermore, there is an emphasis on inculcating global consciousness, forging new partnerships, and pushing for innovat…
S95
Challenges and Opportunities: Emerging Technologies and Sustainability Impacts  — In summary, the workshop highlighted the need for a comprehensive approach to sustainability in technology, covering inf…
S96
Closing Ceremony — It pushed the discussion towards considering future technological developments and how governance should adapt to them. …
S97
9821st meeting — Ecuador:Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S98
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Changfeng Chen:Yes, the discussion, the question were very interesting and inspired me to bring up a relative thinking, …
S99
UN General Assembly appoints experts to the Independent International Scientific Panel on AI — The UN General Assembly hasappointed40 experts to serve on a newly created Independent International Scientific Panel on…
S100
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S101
Embedding Human Rights in AI Standards: From Principles to Practice — Karen McCabe: But before I go there, first, I want to thank the organizers for this session. It’s really a very importan…
S102
Main Session on Artificial Intelligence | IGF 2023 — James Hairston:I’ll maybe start with two projects that I think begin to get it sort of solving for this, but again, are …
S103
Leveraging the UN system to advance global AI Governance efforts — The current difficulties in achieving consensus in multilateral systems underscore the necessity for inclusive negotiati…
S104
Charting the Course: Discussing the Impact and Future of the Internet Governance Forum — Anriette Esterhuysen:I think the new challenges are endless. Just look at climate change. Look at the impact of climate …
S105
Open Forum #33 Open Consultation Process Meeting for WSIS Forum 2025 — Moving forward, participants suggested several action items:
S106
HUMANITARIAN NEGOTIATION — A great deal of the conflict in any negotiation is often played out in a struggle over process. It may sound ridiculous …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Slavina Ancheva
6 arguments193 words per minute880 words273 seconds
Argument 1
Negotiations involve multiple parties, extensive documentation, strategic dynamics, and time pressure, making them cognitively demanding.
EXPLANATION
Slavina describes diplomatic negotiations as complex processes that involve many counterparties, large volumes of documents, strategic group dynamics, and strict time constraints. This complexity creates a heavy cognitive load for negotiators.
EVIDENCE
She illustrates the complexity by describing a scenario where a negotiator faces seven counterparts from different countries, a supporting team, and the interests of 27 other countries, highlighting the multi-layered stakeholder environment [58-65]. She then enumerates the practical challenges: thousands of documents generated, limited analyst resources, strategic groupthink, and tight time limits that pressure negotiators [66-72].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The complexity of diplomatic negotiations, including many counterparties, large document volumes and tight timelines, is highlighted in the discussion of AI transforming diplomacy [S2].
MAJOR DISCUSSION POINT
Complexity of diplomatic negotiations
Argument 2
The MOVE 37 project will develop tools, evaluation methodologies, and conduct stakeholder interviews to embed AI responsibly in diplomatic workflows.
EXPLANATION
J. Michael outlines the technical agenda of MOVE 37, while Slavina explains how the project gathers insights from practitioners through interviews. Together they emphasize building tools, evaluation frameworks, and stakeholder engagement to integrate AI into diplomacy responsibly.
EVIDENCE
J. Michael notes that the project will create tools, evaluation methodologies, and address technical aspects of AI-augmented negotiation [124-129]. Slavina adds that the team conducts one-on-one interviews with current and former diplomats worldwide to inform the project’s design [267-278].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The description of MOVE 37’s objectives, including tool creation, evaluation and practitioner interviews, appears in the AI-transforming-diplomacy overview [S2].
MAJOR DISCUSSION POINT
MOVE 37 practical implementation
Argument 3
AI should be positioned as an augmenting tool for diplomats, preserving the fundamentally interpersonal nature of negotiations.
EXPLANATION
Slavina stresses that the goal is not to replace negotiators but to give them better tools to manage complexity, keeping the human touch central.
EVIDENCE
She says, “We’re not looking to replace diplomats or negotiators here, but just to give them the tools to manage these complexities much better” [61-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The view that AI enhances but does not replace human diplomacy is echoed in discussions about AI’s role as a supportive tool rather than a substitute [S17] and the emphasis on human judgment in AI-augmented decision-making [S21].
MAJOR DISCUSSION POINT
AI as augmentation, not replacement, for diplomatic actors
Argument 4
Responsible deployment of AI in negotiations requires moving beyond large language models to incorporate diverse, domain‑specific methods.
EXPLANATION
Slavina calls for a broader view of AI tools, indicating that reliance solely on LLMs is insufficient for the nuanced demands of diplomatic processes.
EVIDENCE
She outlines the need to “think beyond just LLM” and stresses the need for responsible deployment of these tools [43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Limitations of relying solely on LLMs and the call for broader, domain-specific AI approaches are noted in the analysis of AI’s role in negotiations [S2] and the broader AI-in-negotiation literature [S16].
MAJOR DISCUSSION POINT
Broadening AI approaches beyond LLMs for negotiation
Argument 5
Negotiations increasingly involve multiple actors beyond bilateral settings, requiring AI tools that can handle multi‑stakeholder dynamics.
EXPLANATION
Slavina notes that modern negotiations often include several states or groups, not just two parties, which adds layers of complexity that AI must be able to manage.
EVIDENCE
She describes scenarios where negotiators face seven counterparts from different countries and mentions that “rarely in this world do we have just two states negotiating nowadays” and that there can be a third state or dozens of members, such as the EU’s 27 members, highlighting the multi-party nature of negotiations [48-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from bilateral to multi-party negotiations and the associated information-management challenges are described in the AI-transforming-diplomacy discussion [S2].
MAJOR DISCUSSION POINT
Multi‑party negotiation complexity
Argument 6
Supporting teams behind principal negotiators need AI assistance for evidence gathering and document management.
EXPLANATION
She points out that many departments and agencies provide evidence and documents to lead negotiators, suggesting AI could help these supporting teams handle large information flows.
EVIDENCE
Slavina explains that “there are a lot of teams that are sitting behind these principal negotiators, the different departments and agencies that are supporting them with evidence, with documents” [59-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The heavy documentation burden on supporting teams and the potential for AI-assisted evidence gathering are highlighted in the same source on diplomatic complexity [S2].
MAJOR DISCUSSION POINT
AI support for negotiation support staff
C
Charlie Posniak
8 arguments211 words per minute1310 words371 seconds
Argument 1
AI can help manage information overload, synthesize documents, generate strategic options, and support real‑time execution of negotiations.
EXPLANATION
Charlie proposes that AI can reduce the cognitive burden of negotiations by organizing large data sets, producing actionable insights, and assisting negotiators during live discussions.
EVIDENCE
He outlines a three-stage framework-research analysis, strategizing, and execution-supported by autonomous research agents, source validation, real-time transcription and translation, and strategy sandboxes, showing how AI can automate and augment each phase [103-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s capacity to accumulate, synthesize large information sets and support real-time transcription/translation in negotiations is discussed in the AI-diplomacy overview [S2].
MAJOR DISCUSSION POINT
AI‑enabled negotiation support
Argument 2
LLMs suffer from unverifiable fluency, opacity, and lack of accountability; they cannot replace established analytical frameworks such as game theory and decision analysis.
EXPLANATION
Charlie warns that large language models are not sufficiently transparent or accountable for high‑stakes diplomatic work and that traditional tools like game theory remain essential.
EVIDENCE
He points out that LLM fluency is not verifiable, their internal reasoning is opaque, and accountability is lacking for treaty-shaping recommendations [82-85]. He also references the 80-year-old toolkit of game theory, decision analysis, and machine learning that predates LLMs [86-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Critiques of LLMs’ opacity and the need to integrate them with traditional frameworks like game theory are presented in the same discussion [S2].
MAJOR DISCUSSION POINT
Limitations of LLMs
Argument 3
Effective diplomatic AI must integrate decades of methodological advances rather than depend only on chat‑bot style interactions.
EXPLANATION
Charlie argues that AI for diplomacy should build on a broad set of learning paradigms and the rich history of computational methods, not just on conversational agents.
EVIDENCE
He explains that LLMs sit at the intersection of supervised, unsupervised, and self-supervised learning, but relying solely on chatbots would ignore the extensive advances made over the past 80 years [89-93].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument for building on a long history of computational methods beyond chat-bots aligns with the broader AI-in-negotiation literature [S16].
MAJOR DISCUSSION POINT
Need for broader AI methods
Argument 4
Human authority must remain central; AI tools should be modular, transparent, and augment rather than replace decision‑makers.
EXPLANATION
Charlie stresses that ultimate decision‑making authority must stay with humans, and AI systems should be designed to be understandable and modular so that users can see how each component contributes to analysis.
EVIDENCE
He states that human authority is essential, tools must be modular and transparent, and augmentation should be appropriately scoped to the team and institution [117-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The principle that human judgment stays central and AI should be transparent and modular is emphasized in the human-centric AI guidelines [S21].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop governance
Argument 5
Autonomous research agents, real‑time transcription/translation, strategy sandboxes, and red‑team simulations illustrate a roadmap for AI‑supported negotiations.
EXPLANATION
Charlie envisions a future where AI provides autonomous research, instant language services, and simulated environments for testing negotiation strategies, forming a comprehensive support ecosystem.
EVIDENCE
He describes autonomous research agents that generate validated sources, counterpart biographies, gap analyses, strategy sandboxes, red-team training, and real-time transcription/translation capabilities [108-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The roadmap featuring autonomous agents, transcription, translation and strategy sandboxes is outlined in the AI-transforming-diplomacy presentation [S2].
MAJOR DISCUSSION POINT
Future vision of AI‑augmented diplomacy
Argument 6
A robust computational infrastructure is essential to process large, unstructured data streams and enable real‑time analysis for AI‑supported negotiations.
EXPLANATION
Charlie argues that without strong data pipelines and processing capabilities, AI cannot effectively support the research, strategizing, and execution phases of diplomacy.
EVIDENCE
He notes that “you need this really strong computational infrastructure to be able to even begin to apply some of the really exciting and fascinating AI and ML methods” [107-108].
MAJOR DISCUSSION POINT
Infrastructure prerequisite for AI‑augmented diplomacy
Argument 7
Established analytical frameworks such as game theory and decision analysis should be integrated with AI tools to model strategic interactions in diplomacy.
EXPLANATION
Charlie argues that the 80‑year‑old toolkit of game theory, decision analysis, and machine learning remains essential and must be combined with modern AI capabilities for robust diplomatic analysis.
EVIDENCE
He references “We have game theory, decision analysis, a great range of theoretical developments that exist precisely to model strategic interactions under uncertainty” and notes that these tools are “80 years old” and should complement LLMs [86-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to combine AI with game theory and decision analysis for strategic modeling is discussed in the same source [S2].
MAJOR DISCUSSION POINT
Integrating traditional strategic models with AI
Argument 8
AI systems must be designed to detect strategic misrepresentation and deception within negotiations.
EXPLANATION
Charlie highlights that negotiations involve environments where parties may lie or deceive, and AI tools need capabilities to identify such behavior to support trustworthy outcomes.
EVIDENCE
He describes “environments where there’s real strategic misrepresentation, where people are lying or deceiving or trying to shape outcomes for their own advantages” and stresses the need for AI to handle these challenges [98-99].
MAJOR DISCUSSION POINT
AI handling of deception in negotiations
G
Gabriela Ramos
7 arguments164 words per minute1455 words531 seconds
Argument 1
The UNESCO AI ethics negotiation illustrated how AI could have streamlined position‑tracking and stakeholder mapping.
EXPLANATION
Gabriela reflects on the UNESCO AI ethics recommendation process, noting that AI tools would have helped map country positions and manage massive public feedback more efficiently.
EVIDENCE
She recounts negotiating the UNESCO recommendation with 193 countries during COVID, using Zoom to see each country’s stance, and receiving 55,000 public comments that were later integrated with AI assistance [141-144].
MAJOR DISCUSSION POINT
UNESCO case study of AI in negotiation
Argument 2
Concerns about misrepresentation, cultural bias, and over‑reliance demand continuous questioning of AI assumptions and safeguards to keep humans in control.
EXPLANATION
Gabriela warns that AI tools can misrepresent cultures or over‑represent certain languages, and stresses the need for ongoing scrutiny to ensure human decision‑making remains primary.
EVIDENCE
She highlights risks of misrepresentation, over-representation of cultures, and the need to question AI assumptions, as well as the danger of becoming lazy and over-trusting AI without verification [352-358][363-367].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of human oversight, bias mitigation and continuous scrutiny of AI outputs is highlighted in the human-centric AI perspective [S21] and the data-governance focus on cultural representation [S23].
MAJOR DISCUSSION POINT
Ethical governance and bias concerns
Argument 3
Multilingual, culturally diverse training data are required to avoid bias; UNESCO’s experience shows language and philosophical diversity must be reflected in models.
EXPLANATION
Gabriela stresses that AI models must incorporate many languages and cultural philosophies to prevent bias, citing UNESCO’s encounter with diverse linguistic perspectives during the AI ethics negotiation.
EVIDENCE
She explains that culture is expressed through language, cites the Namibian representative’s critique of an individualistic draft, references Ubuntu philosophy, and calls for models that represent multiple languages and sources to avoid discrimination [391-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for multilingual, culturally diverse datasets to prevent bias is emphasized in the data-governance and cultural representation discussion [S23].
MAJOR DISCUSSION POINT
Cultural representation in AI models
Argument 4
Overreliance on AI can foster complacency and laziness among negotiators, reducing critical human judgment.
EXPLANATION
Gabriela warns that excessive trust in AI tools may lead negotiators to become passive, undermining the rigorous scrutiny required in high‑stakes diplomacy.
EVIDENCE
She remarks that “you can become very lazy” and stresses the need to keep humans in the driver’s seat, warning against letting AI dominate decision-making [365-367].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Warnings about AI-induced complacency and the necessity of retaining human judgment are echoed in the human-in-the-loop AI guidelines [S21].
MAJOR DISCUSSION POINT
Risk of negotiator complacency due to AI overtrust
Argument 5
AI tools should be designed to foster collaborative understanding rather than to be used merely as weapons to outmaneuver counterparts.
EXPLANATION
Gabriela argues that AI should open a space for human understanding in negotiations, not simply be built to beat the other side or maximise efficiency, and that negotiators must stay in the driver’s seat.
EVIDENCE
She says tools must not be just to “beat the person in front of me” or “maximize efficiency,” but to “open a space of human understanding” and keep humans in control of the process [354-357].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The view that AI should support collaborative understanding rather than be a competitive weapon aligns with the broader argument that AI enhances but does not replace human diplomacy [S17].
MAJOR DISCUSSION POINT
Ethical design of AI for diplomacy
Argument 6
AI could provide negotiators with a searchable repository of historical country positions and past negotiation outcomes to inform strategic preparation.
EXPLANATION
Gabriela notes that having a centralized database of how countries have traditionally positioned themselves would help diplomats anticipate moves and craft more effective arguments during negotiations.
EVIDENCE
She remarks that it would have been “amazing, just to have a repository of what is the traditional position of certain countries or certain negotiations” and that such a tool would aid in understanding both substantive and contextual stances of actors [144-145].
MAJOR DISCUSSION POINT
Historical data repositories for diplomatic strategy
Argument 7
Using AI to gather personal or strategic information about individual negotiators raises privacy and ethical concerns that must be addressed.
EXPLANATION
Gabriela warns that while AI could provide detailed insights into a counterpart’s motivations, such profiling risks violating privacy and requires careful ethical safeguards.
EVIDENCE
She asks “how can you have more information about that person, what moves them… but this is risky because it deals with privacy and all of those things” while discussing the use of AI to influence a negotiator [146-147].
MAJOR DISCUSSION POINT
Privacy risks of AI‑driven profiling
N
Nandita Balakrishnan
9 arguments214 words per minute1355 words379 seconds
Argument 1
AI outputs should be treated as data points that require human explanation and validation; they must not replace human‑crafted intelligence.
EXPLANATION
Nandita argues that AI‑generated assessments are only one input among many and must be accompanied by human justification, especially in high‑stakes policy contexts.
EVIDENCE
She states that AI outputs should be seen as data points, that policymakers need to explain how assessments were derived, and that humans must retain ultimate accountability for intelligence products [297-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The principle that AI-generated assessments are only one input and must be validated by humans is reinforced in the human-centric AI guidance [S21].
MAJOR DISCUSSION POINT
Human validation of AI outputs
Argument 2
Expanding AI literacy across intelligence, State Department, and other federal agencies, and demonstrating concrete use cases (e.g., geopolitical event prediction) are key to adoption.
EXPLANATION
Nandita emphasizes the need for AI training throughout the public sector and showcases predictive geopolitical event modeling as a tangible example to encourage uptake.
EVIDENCE
She notes that AI reshapes the threat landscape, calls for AI literacy in intelligence, State Department, and other agencies, and describes a project that used AI to predict geopolitical events for both military and diplomatic purposes [194-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for AI literacy across diplomatic and intelligence communities and the importance of concrete use-cases are discussed in the diplomatic competencies for the AI era document [S20].
MAJOR DISCUSSION POINT
AI literacy and use‑case demonstration
Argument 3
Predictive and adaptive policy tools can reshape diplomatic analysis, but must be integrated into everyday workflows with clear human oversight.
EXPLANATION
Nandita points out that AI‑driven predictive tools can improve diplomatic forecasting, yet they must be embedded in routine processes and remain subject to human review.
EVIDENCE
She references the same geopolitical-event-prediction project as an example of predictive capability and reiterates that human-in-the-loop remains essential for accountability and validation [194-202][191-193].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to embed predictive AI tools into routine diplomatic workflows while retaining human oversight is highlighted in the same competency framework [S20].
MAJOR DISCUSSION POINT
Predictive policy tools and human oversight
Argument 4
AI can surface historically overlooked data, enabling analysts to incorporate long‑term evidence that would otherwise be missed.
EXPLANATION
She illustrates how AI could have identified a ten‑year‑old data point that contradicted an intelligence assessment, highlighting AI’s role in uncovering hidden information.
EVIDENCE
She recounts a story where a mentor pointed out a missing piece of data from ten years ago that “completely negates your argument,” and suggests that an AI tool could have identified that gap [185-189].
MAJOR DISCUSSION POINT
AI as a tool for uncovering hidden historical evidence
Argument 5
AI can increase analyst efficiency by automating manual processes, allowing faster and smarter work.
EXPLANATION
Nandita reflects that without AI she had to perform assessments manually, but with AI tools she could work much faster and smarter, indicating significant productivity gains for analysts.
EVIDENCE
She asks herself, “if I had access to these tools as an analyst, how could I have worked much faster and much smarter?” highlighting the contrast between manual work and AI-assisted efficiency [178-180].
MAJOR DISCUSSION POINT
Productivity gains for intelligence analysts through AI
Argument 6
AI can surface counter‑arguments and alternative perspectives, helping analysts consider a broader evidence base and reduce personal bias.
EXPLANATION
She explains that AI tools can automatically generate possible counter‑points to an analyst’s assessment, prompting the analyst to evaluate and justify their conclusions more rigorously.
EVIDENCE
Nandita states that AI is “really helpful in helping you sort out the counterarguments, but you still need to understand how those counterarguments work” and that this process strengthens the analyst’s justification of their assessment [304-308].
MAJOR DISCUSSION POINT
AI‑assisted critical review of analytical reasoning
Argument 7
AI can facilitate cross‑agency collaboration by embedding tools into the daily workflows of intelligence, State Department, commerce, and other federal entities.
EXPLANATION
She argues that for AI to be effective in geopolitics, it must be integrated across multiple government sectors, enabling shared data and analytical capabilities.
EVIDENCE
Nandita points out the need for “day-to-day workflows integrating this technology” within the intelligence community, the State Department, commerce, OPM, and other federal employees, emphasizing a coordinated adoption across agencies [198-200].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-agency integration of AI tools for coordinated decision-making is emphasized in the diplomatic competencies for the AI era [S20].
MAJOR DISCUSSION POINT
Cross‑sector AI integration in public‑sector decision‑making
Argument 8
Artificial intelligence has fundamentally altered the global threat landscape and must be treated as a foundational element in geopolitics and foreign‑policy analysis.
EXPLANATION
She argues that AI reshapes security, economic, and diplomatic competition, making it impossible to separate AI considerations from geopolitical assessments.
EVIDENCE
Nandita states “AI has fundamentally changed the threat landscape and the scope for global competition. It is now kind of the foundational way we need to think about geopolitics” [194-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The assertion that AI is now a core factor in geopolitics and security analysis is reflected in the discussion of AI’s impact on diplomatic competencies [S20].
MAJOR DISCUSSION POINT
AI as a core factor in geopolitics
Argument 9
AI capability constitutes a strategic asset that can provide competitive leverage to states, influencing power dynamics in international negotiations.
EXPLANATION
She emphasizes that nations with advanced AI tools gain advantages, making AI adoption a key factor in diplomatic and security competition.
EVIDENCE
She notes that “AI has fundamentally changed the threat landscape… you really cannot divorce AI and AI adoption when you’re trying to understand geopolitics” indicating its role as a source of competitive leverage [194-196].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The strategic advantage that AI provides to states and its effect on power balances is discussed in the analysis of AI’s impact on global power structures [S22].
MAJOR DISCUSSION POINT
AI as a strategic competitive advantage
R
Robyn Scott
13 arguments0 words per minute0 words1 seconds
Argument 1
Users must stay “above the algorithm,” preserving human agency and avoiding a zero‑sum dynamic where AI dominates the process.
EXPLANATION
Robyn introduces the metaphor of being “below” or “above” the algorithm, urging users to keep AI as a tool rather than allowing it to dictate outcomes, thereby protecting human agency.
EVIDENCE
She describes workers below the algorithm (e.g., Uber drivers) versus users above it who leverage tools for their goals, and warns against a zero-sum dynamic where AI takes over decision-making [256-259][254-255].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation that human users remain above algorithmic control to safeguard agency aligns with the human-in-the-loop AI principles [S21].
MAJOR DISCUSSION POINT
Maintaining human agency
Argument 2
Public‑sector officials are optimistic about AI but lack systematic evaluation; building AI literacy and robust pilot frameworks is essential.
EXPLANATION
Robyn reports that while public servants see huge AI potential, many lack structured evaluation of pilots, highlighting a gap that must be closed through training and assessment frameworks.
EVIDENCE
She cites a 5,000-person survey showing >90 % optimism, notes the paradox of optimism versus caution, points out that 70 % of leaders have pilots but only 45 % evaluate them, and highlights a skills gap where most implementers do not understand their own ethical frameworks [228-242][243-250].
MAJOR DISCUSSION POINT
AI optimism vs evaluation gap
Argument 3
Data poisoning, prompt injection, and over‑trust in AI outputs pose security and reliability threats; psychological safeguards and “battle‑mindset” against the algorithm are needed.
EXPLANATION
Robyn warns that AI systems can produce false negatives and that users may become over‑confident, advocating for a vigilant, skeptical stance toward algorithmic outputs.
EVIDENCE
She describes false-negative failures, the “sleeping at the wheel” phenomenon where users over-trust high-accuracy models, and proposes a heuristic of treating any algorithmic output as an adversary to maintain critical scrutiny [329-344].
MAJOR DISCUSSION POINT
Security and psychological safeguards
Argument 4
Effective AI policy work requires strong contextual expertise rather than merely content expertise, ensuring that AI tools are tailored to the specific policy environment.
EXPLANATION
Robyn emphasizes that her organization brings contextual knowledge to the table and then integrates content experts, arguing that understanding the policy context is essential for successful AI deployment.
EVIDENCE
She states that “we are more context experts than content experts, and we bring the content experts into the middle” indicating a strategic approach that prioritizes contextual insight over raw content expertise [224].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on contextual expertise over pure content expertise for effective AI policy is highlighted in the diplomatic competencies for the AI era [S20].
MAJOR DISCUSSION POINT
Importance of contextual expertise in AI‑enabled policy
Argument 5
AI has the potential to unlock roughly $1.75 trillion of value in the public sector by automating bureaucratic, repeatable processes.
EXPLANATION
Citing a BCG estimate, Robyn argues that the scale of economic benefit from AI in government stems from its ability to handle routine, bureaucratic tasks efficiently.
EVIDENCE
She references a BCG figure that estimates “1.75 trillion of public sector value to be unlocked if we harness AI in the right way, because AI loves bureaucracy, all these repeatable processes” [231-233].
MAJOR DISCUSSION POINT
Economic upside of AI in public‑sector bureaucracy
Argument 6
Leaders must personally use AI tools to develop a concrete understanding, rather than relying on abstract knowledge.
EXPLANATION
Robyn argues that senior officials need hands‑on experience with AI to appreciate its speed and implications, which is essential for informed decision‑making.
EVIDENCE
She notes that “you cannot understand this technology in the abstract” and stresses that “you’ve got to use it” and “feel the speed of change” for leaders to grasp AI’s impact [242-247].
MAJOR DISCUSSION POINT
Need for leader‑level hands‑on AI experience
Argument 7
A large share of public officials lack knowledge of their own national AI ethical frameworks, creating a governance and skills gap.
EXPLANATION
Robyn points out that only a minority understand their country’s ethical guidelines, which risks uncoordinated and potentially unsafe AI deployments.
EVIDENCE
She reports that “Only 26 % of them say they understand their own country’s ethical frameworks, so approximately three-quarters are freestyling,” describing this as a “terrifying” skills gap [248-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap in understanding national AI ethical guidelines among public servants is noted in the human-centric AI governance discussion [S21].
MAJOR DISCUSSION POINT
Ethical‑framework knowledge gap among public servants
Argument 8
AI excels at research and writing tasks, providing productivity gains for public servants.
EXPLANATION
Robyn points out that AI is particularly strong at handling research and writing, which can help civil servants manage large volumes of documentation more efficiently.
EVIDENCE
She states that AI is “great at” research and writing tasks, highlighting its potential to automate repeatable bureaucratic processes and unlock value for public sector workers [231-236].
MAJOR DISCUSSION POINT
AI as a productivity tool for routine public‑sector work
Argument 9
AI can enable predictive and adaptive policy making, opening a “vitamin” prize of transformative possibilities beyond routine automation.
EXPLANATION
Robyn distinguishes between the immediate “painkiller” of automating bureaucracy and a longer‑term “vitamin” where AI supports predictive, responsive, and adaptive governance, limited only by imagination.
EVIDENCE
She explains that moving to the “vitamin prize” involves AI doing predictive policy making, responsive policy, and adaptive policy, which she describes as a space bounded only by imagination [237-239].
MAJOR DISCUSSION POINT
Future potential of AI for advanced policy functions
Argument 10
Partnering with academic research centres such as Stanford HAI strengthens AI policy work by providing deep contextual expertise and access to cutting‑edge research.
EXPLANATION
Robyn explains that her organization acts as a context expert and brings content experts into the process, and that collaboration with Stanford HAI exemplifies how academic partnerships can enrich the design and deployment of AI tools for policymakers.
EVIDENCE
She states that “Stanford HAI is one of our collaborators” and describes their role as “more context experts than content experts, and we bring the content experts into the middle,” highlighting the value of the partnership [223-226].
MAJOR DISCUSSION POINT
Importance of academic collaboration for AI‑enabled policy making
Argument 11
A multilingual, diplomatically‑oriented LLM initiative led by Swiss actors demonstrates a concrete step toward inclusive AI tools that can serve diverse linguistic and diplomatic contexts.
EXPLANATION
Robyn points to a Swiss‑driven project that is building a large language model trained on more than 100 languages and operated by a former Swiss diplomat, showing how language diversity and diplomatic relevance are being embedded into AI development.
EVIDENCE
She notes that “The Swiss have built sort of a quasi-Swiss government, quasi-multilateral initiative to build an LLM that is trained from the outset on more than 100 languages, and it is actually run by a friend of mine who’s a former Swiss diplomat” [424-425].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Swiss-driven multilingual LLM project aimed at diplomatic applications is described in the data-governance and multilingual AI discussion [S23].
MAJOR DISCUSSION POINT
Multilingual AI models for diplomatic applications
Argument 12
High optimism about AI in the public sector is not matched by concrete implementation; there is a gap between AI talk and AI action that must be closed.
EXPLANATION
Robyn observes that while public servants express strong enthusiasm for AI’s potential, actual deployment of AI solutions remains limited, indicating a need to move from discussion to tangible projects and measurable outcomes.
EVIDENCE
She cites a 5,000-person survey showing over 90 % of public servants are optimistic about AI, yet she notes that “there’s lots of AI talk. There’s less AI action.” She also highlights the prevalence of pilots (70 % of leaders have them) but the low rate of systematic evaluation (only 45 % evaluate), illustrating the implementation gap [228-236][241-242].
MAJOR DISCUSSION POINT
Implementation gap between AI optimism and action
Argument 13
AI tools in education must be culturally and contextually relevant; otherwise they fail to improve student outcomes.
EXPLANATION
Robyn stresses that deploying AI technology in schools is insufficient on its own; the tools need to be aligned with local cultural norms and content relevance to be effective. Without such alignment, even sophisticated AI systems will not lead to better learning results.
EVIDENCE
She observes that countries which introduced large amounts of technology into their educational systems did not achieve better student outcomes because the content was not culturally or contextually appropriate, and that tools must be paired with locally relevant content to be useful [426-430].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity for culturally and contextually appropriate AI tools in education is emphasized in the cultural relevance and digital divide analysis [S23].
MAJOR DISCUSSION POINT
Cultural and contextual relevance of AI in education
J
J. Michael McQuade
6 arguments182 words per minute2899 words954 seconds
Argument 1
The MOVE 37 project will develop tools, evaluation methodologies, and conduct stakeholder interviews to embed AI responsibly in diplomatic workflows.
EXPLANATION
Michael outlines the project’s technical roadmap, emphasizing tool development, evaluation strategies, and direct engagement with diplomats to ensure responsible AI integration.
EVIDENCE
He states that the project will create tools, evaluation methodologies, and leverage a large network of negotiation practitioners, while Slavina adds that one-on-one interviews with diplomats inform the design [124-129][267-278].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The description of MOVE 37’s objectives, including tool creation, evaluation and practitioner interviews, appears in the AI-transforming-diplomacy overview [S2].
MAJOR DISCUSSION POINT
MOVE 37 implementation plan
Argument 2
Establishing a new discipline for AI‑augmented negotiations, with ongoing collaboration among scholars, practitioners, and policymakers, is the overarching goal.
EXPLANATION
Michael frames the initiative as the foundation of a new field that brings together academia, policy, and practice to shape AI‑supported diplomacy over the long term.
EVIDENCE
He describes the vision of a discipline with clear start-middle-end phases, invites ongoing participation, and thanks collaborators, underscoring the intent to build a lasting community [124-129][436-440].
MAJOR DISCUSSION POINT
Creation of AI‑augmented diplomacy discipline
Argument 3
AI tools can reshape global power dynamics by providing competitive leverage to states that adopt them, potentially widening asymmetries in diplomatic negotiations.
EXPLANATION
Michael notes that the diffusion of AI capabilities will affect how power is distributed, with AI‑enabled states gaining strategic advantages in both cooperative and adversarial negotiations.
EVIDENCE
He states that “we think a lot about… where will tools provide competitive leverage… how it will change power structures” and that AI tools will be “dispersed actively and offensively” affecting negotiations [431-432].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential for AI to alter international power balances and give strategic advantage to AI-enabled states is discussed in the analysis of AI’s impact on power structures [S22].
MAJOR DISCUSSION POINT
AI’s impact on international power balance
Argument 4
Human judgment must filter AI‑generated recommendations in negotiations to maintain accountability.
EXPLANATION
Michael stresses that when AI proposes options, negotiators need to apply human judgment rather than accept AI outputs uncritically, preserving democratic and internal accountability.
EVIDENCE
He says, “When the AI says, here’s a thing? I don’t just trust it, it’s a priority. I have to apply a human judgment to what I’m hearing” indicating the need for human oversight of AI suggestions [371-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The requirement that human judgment oversee AI recommendations to preserve accountability aligns with the human-in-the-loop AI guidelines [S21].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop governance for AI‑augmented diplomacy
Argument 5
AI can serve as a trusted information aggregator, helping negotiators identify leverage points and new pathways to successful outcomes.
EXPLANATION
Michael suggests that AI should be viewed as a reliable source that consolidates data and insights, enabling negotiators to discover strategic options they might otherwise miss.
EVIDENCE
He says “we are looking for collaborators, partners, and input… and the follow-up to negotiations can be a subject for the tools and applications of modern technology” and later notes “the trusted agent to accumulate information… new pathways for success” [210-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of AI as a trusted aggregator of information for strategic insight is highlighted in the AI-transforming-diplomacy discussion [S2].
MAJOR DISCUSSION POINT
AI as an information‑gathering aid
Argument 6
Establishing baseline capabilities and clear ground rules for AI use is essential to ensure responsible deployment in diplomatic contexts.
EXPLANATION
Michael stresses the need for predefined standards and capabilities so that AI tools augment negotiations without undermining accountability or creating ambiguity.
EVIDENCE
He remarks that “we have to have a baseline of capability” and that “the question is how can modern tools help in that process without removing or absolving responsibility for people” indicating the importance of ground rules [208-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for baseline capabilities and governance rules for AI in diplomacy is reflected in the human-centric AI governance recommendations [S21].
MAJOR DISCUSSION POINT
Baseline capabilities and governance for AI in diplomacy
A
Audience
1 argument158 words per minute399 words151 seconds
Argument 1
Audience concerns highlighted the need for transparent source attribution and safeguards against malicious manipulation of training data.
EXPLANATION
An audience member raised questions about ensuring cultural diversity in datasets and protecting AI models from data poisoning and prompt‑injection attacks.
EVIDENCE
The participant asked how to embed diverse cultural inputs in models and how to guard against data poisoning and prompt injection in AI-mediated negotiations [386-390]. Gabriela responded by emphasizing language representation, source transparency, and testing safeguards [391-403].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent data provenance, cultural diversity in datasets, and protection against data poisoning are discussed in the data-governance and cultural representation literature [S23] and reinforced by human-centric AI safeguards [S21].
MAJOR DISCUSSION POINT
Audience demand for data governance and cultural representation
Agreements
Agreement Points
AI should augment, not replace, diplomats; human authority and judgment must remain central.
Speakers: Slavina Ancheva, Charlie Posniak, Gabriela Ramos, J. Michael McQuade, Robyn Scott, Nandita Balakrishnan
AI is not replacing diplomats but giving them better tools to manage complexities [61-62] Human authority must remain central; tools should be modular and transparent [117-120] Need to keep humans in the loop and question AI assumptions to avoid misrepresentation and over-reliance [352-358][363-367] Human judgment must filter AI-generated recommendations to preserve accountability [371-374] Users must stay “above the algorithm” to preserve human agency [256-259] AI outputs are data points that require human explanation and validation [297-304]
All speakers emphasized that AI is a supportive tool while final decision-making must stay with humans, requiring transparency, questioning, and human judgment throughout the negotiation process [61-62][117-120][352-358][371-374][256-259][297-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Both the AI diplomacy discourse and scholarly commentary stress that AI is a preparatory tool while final decisions must rest with human diplomats, preserving the art of negotiation and trust-building [S38][S39][S40].
Building AI literacy and capacity across the public sector is essential for effective adoption.
Speakers: Nandita Balakrishnan, Robyn Scott
Expanding AI literacy across intelligence, State Department and other federal agencies is key, with concrete use-cases like geopolitical event prediction [194-200] Leaders must personally use AI tools; there is a large skills gap and many pilots lack systematic evaluation [242-247][228-242]
Both speakers highlighted the need for widespread AI training and hands-on experience among officials, noting current gaps in skills and evaluation of pilots [194-200][242-247][228-242].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building was highlighted as a key challenge for Francophone states in digital-AI regulation forums [S42] and reiterated in UN-AI infrastructure discussions emphasizing skills and standards development [S58]; parliamentary insights further call for continuous education of all stakeholders [S59].
Multilingual and culturally diverse training data are required to avoid bias and ensure inclusive AI tools.
Speakers: Gabriela Ramos, Robyn Scott
Multilingual, culturally diverse training data are required to avoid bias; language expresses culture and must be represented in models [391-403] Swiss initiative building an LLM trained on >100 languages for diplomatic use demonstrates a concrete step toward inclusive AI [424-425]
Both speakers stressed that AI systems must incorporate many languages and cultural perspectives to prevent discrimination and reflect global diversity [391-403][424-425].
POLICY CONTEXT (KNOWLEDGE BASE)
Ensuring linguistic diversity aligns with SDG-4 and SDG-10 recommendations and broader AI content-policy goals for multilingualism and cultural inclusion [S43][S44]; gender-bias research also underscores the need for diverse datasets to mitigate discriminatory outcomes [S60][S61].
AI can alleviate information overload in negotiations by synthesizing documents, generating strategic options, and supporting real‑time execution.
Speakers: Charlie Posniak, Slavina Ancheva, J. Michael McQuade, Nandita Balakrishnan, Robyn Scott
AI can manage information overload, synthesize documents, generate strategic options, and support real-time execution [103-115] Negotiations involve massive documentation, strategic dynamics and time pressure, creating heavy cognitive load that AI could help mitigate [66-72] AI is a trusted agent to accumulate information and reveal new pathways for success [210-212] AI can surface historically overlooked data, improving analyst efficiency [185-189] AI excels at research and writing tasks, offering productivity gains for public servants [231-236]
All speakers recognized AI’s potential to handle large data volumes, produce analyses, and aid negotiators during live discussions, thereby reducing cognitive burden and improving outcomes [103-115][66-72][210-212][185-189][231-236].
POLICY CONTEXT (KNOWLEDGE BASE)
Case studies of AI-enabled diplomacy demonstrate its role in aggregating large document sets, modeling negotiation scenarios, and offering actionable options, thereby reducing cognitive load for negotiators [S48][S49][S50].
Responsible AI deployment requires transparent, modular tools, baseline capabilities, and systematic evaluation.
Speakers: Charlie Posniak, J. Michael McQuade, Robyn Scott, Gabriela Ramos
Tools must be modular, transparent and augment rather than replace decision-makers [117-120] Establishing baseline capabilities and clear ground rules is essential for responsible deployment [208-212] There is a gap between AI optimism and actual evaluation of pilots; systematic assessment is needed [228-242] Continuous questioning of AI assumptions is necessary to keep humans in control [352-358]
Speakers agreed on the need for clear standards, modular transparency, and rigorous evaluation to ensure AI is used responsibly in diplomatic contexts [117-120][208-212][228-242][352-358].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Security Council stresses algorithmic transparency and rigorous testing as prerequisites for trustworthy AI systems [S57]; responsible-deployment frameworks and emerging standards discussed at IGF and EU workshops further call for modular, benchmarked tools and systematic assessment [S55][S56][S53][S54].
AI constitutes a strategic asset that can reshape global power dynamics and provide competitive leverage to states.
Speakers: J. Michael McQuade, Nandita Balakrishnan
AI will change power structures and provide competitive leverage; tools will be dispersed actively and offensively [431-432] AI has fundamentally changed the threat landscape and is a strategic competitive advantage in geopolitics [194-196]
Both highlighted that AI adoption will affect international power balances, giving advantage to states that effectively integrate AI into diplomatic and security strategies [431-432][194-196].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI’s geopolitical impact note its capacity to shift economic and societal power, creating new centres of influence and serving as a strategic lever for national advantage [S45][S47].
Similar Viewpoints
Both stress that without widespread AI competence and hands‑on experience among officials, AI initiatives will remain ineffective and untested [194-200][242-247][228-242].
Speakers: Nandita Balakrishnan, Robyn Scott
Expanding AI literacy across agencies and need for leaders to use AI personally [194-200][242-247] Large skills gap and lack of systematic evaluation of AI pilots [228-242]
Both call for clear, transparent standards and baseline capabilities to guide AI integration in diplomacy [117-120][208-212].
Speakers: Charlie Posniak, J. Michael McQuade
Need for modular, transparent tools and baseline capabilities for responsible AI use [117-120][208-212]
Both agree that linguistic and cultural diversity must be embedded in AI models to ensure fairness and relevance in diplomatic contexts [391-403][424-425].
Speakers: Gabriela Ramos, Robyn Scott
Importance of multilingual, culturally diverse data to avoid bias [391-403] Swiss multilingual LLM initiative as a concrete example [424-425]
Both recognize the heavy information burden of negotiations and see AI as a means to reduce cognitive load and improve decision‑making [66-72][103-115].
Speakers: Slavina Ancheva, Charlie Posniak
Negotiations are complex and generate massive documentation; AI can help manage this complexity [66-72] AI can manage information overload and support real-time execution [103-115]
Unexpected Consensus
Cultural and linguistic inclusivity in AI models.
Speakers: Gabriela Ramos, Robyn Scott
AI must represent many languages and cultural philosophies to avoid bias [391-403] Swiss project building a multilingual LLM for diplomatic use [424-425]
Despite coming from different professional backgrounds (UN diplomacy vs. policy-implementation consultancy), both converged on the necessity of multilingual, culturally aware AI, highlighting a cross-sector recognition of this issue that was not anticipated earlier in the discussion [391-403][424-425].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy documents on AI content and multilingualism call for inclusive model design to serve diverse linguistic communities and reduce inequities [S43][S44].
Treating AI outputs as supplemental data points rather than definitive intelligence.
Speakers: Nandita Balakrishnan, Charlie Posniak
AI outputs should be seen as data points requiring human validation [297-304] LLMs have unverifiable fluency and opacity; they cannot replace established analytical frameworks [82-85]
Both emphasized the limitation of LLMs and the need for human oversight, aligning on the view that AI should augment rather than replace traditional analytical methods-a convergence not explicitly stated before their individual remarks [297-304][82-85].
Overall Assessment

The panel displayed strong consensus on keeping humans central to AI‑augmented diplomacy, the necessity of building AI capacity and literacy, the importance of multilingual and culturally diverse data, and the need for transparent, evaluated, and modular AI tools. There was also agreement that AI will become a strategic asset influencing power dynamics.

High consensus across most themes, indicating broad alignment among scholars, practitioners, and policymakers on the principles governing AI use in diplomatic negotiations. This convergence suggests that future initiatives like MOVE 37 are likely to adopt human‑in‑the‑loop designs, prioritize capacity building, and address cultural inclusivity, while also preparing for the strategic implications of AI on international power structures.

Differences
Different Viewpoints
Acceptability of opacity in AI models for diplomatic use
Speakers: Charlie Posniak, Robyn Scott
LLM fluency is not verifiable, opacity hampers accountability for high-stakes negotiations [82-85] Full legibility over model workings is not possible, but the lack of transparency is not insurmountable [323-328]
Charlie argues that the opacity of large language models makes them unsuitable for treaty‑shaping work because their reasoning cannot be verified or held accountable. Robyn acknowledges the black‑box nature but contends that it does not preclude use, suggesting work‑arounds are feasible. The two therefore disagree on how much opacity can be tolerated in diplomatic AI tools.
POLICY CONTEXT (KNOWLEDGE BASE)
UN discussions flag algorithmic opacity as a risk, emphasizing the need for transparency and verifiable evaluation in diplomatic AI applications [S57]; responsible-deployment dialogues similarly caution against opaque systems [S55].
Degree of trust to place in AI‑generated assessments
Speakers: J. Michael McQuade, Nandita Balakrishnan
AI can be a trusted information aggregator that offers new levers and pathways for negotiators [210-212][371-374] AI outputs should be treated only as data points that require human explanation and validation; they must not replace human-crafted intelligence [297-304]
Michael envisions AI as a reliable source that negotiators can lean on for strategic insight, whereas Nandita stresses that AI outputs must always be justified and overseen by humans, limiting the level of trust placed in them. Both see value in AI but diverge on how much autonomous reliance is acceptable.
POLICY CONTEXT (KNOWLEDGE BASE)
World Economic Forum and AI-trust frameworks argue that trust must be grounded in context-specific risk assessments and supported by standards, laws, and conformity mechanisms [S51][S52].
Framing AI as a collaborative aid versus a competitive lever
Speakers: Gabriela Ramos, Robyn Scott
AI tools should not be built to ‘beat’ counterparts; they must open space for human understanding and keep humans in control [354-357][363-367] There is a risk of a zero-sum dynamic where AI dominates the process; a ‘battle-mindset’ against the algorithm is needed to avoid complacency [254-259][424-425]
Gabriela stresses that AI should foster collaborative understanding and avoid being used as a weapon against other negotiators. Robyn, while also warning against over‑reliance, frames AI more as a competitive technology that could create a zero‑sum environment, advocating for a defensive stance toward algorithms. Their goals of responsible AI overlap, but their conceptual framing diverges.
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic analyses portray AI as a competitive asset reshaping power balances [S45], while governance sessions advocate for collaborative, responsible use of AI agents to support-not replace-human decision-making [S55].
Unexpected Differences
Readiness to codify AI governance standards versus practical evaluation gaps
Speakers: J. Michael McQuade, Robyn Scott
We need to set baseline capabilities and ground rules for AI-augmented negotiations now [208-212] A large share of pilots are unevaluated, and most officials lack understanding of ethical frameworks, indicating that solid standards are premature [241-242][243-250]
It is surprising that Michael, leading the MOVE 37 initiative, pushes for immediate baseline rules, while Robyn, who works closely with public‑sector pilots, highlights the lack of evaluation and ethical‑framework awareness, suggesting the field is not yet ready for firm standards.
POLICY CONTEXT (KNOWLEDGE BASE)
IGF and EU policy roadmaps highlight a tension between the push to formalise AI standards and the reality of limited evaluation data, calling for evidence-based benchmarks and iterative testing before full codification [S53][S54][S55][S56][S58][S59].
Overall Assessment

The panel largely concurs that AI can augment diplomatic work, but key tensions arise around model transparency, the level of trust to place in AI outputs, and whether AI should be framed as a collaborative facilitator or a competitive tool. These disagreements reflect differing risk tolerances and disciplinary backgrounds (technical vs policy).

Moderate – while there is broad consensus on the need for human‑in‑the‑loop and responsible deployment, the speakers diverge on practical governance (opacity, trust, framing). The disagreements suggest that any MOVE 37 implementation will need flexible guidelines that accommodate both high‑accountability requirements and the pragmatic constraints of existing AI technology.

Partial Agreements
Both agree that AI should support negotiators with richer contextual and strategic information, but Gabriela emphasizes a searchable historical database while Charlie stresses formal analytical models; they differ on the primary type of tool to prioritize.
Speakers: Gabriela Ramos, Charlie Posniak
A repository of historical country positions would aid negotiators in preparation [144-145] Integrating game-theoretic and decision-analysis frameworks with AI is essential for strategic modelling [86-88][98-100]
Both want responsible AI deployment, but Michael assumes a baseline can be defined now, whereas Robyn points out that the current pilot landscape is too unevenly evaluated to support firm baselines.
Speakers: J. Michael McQuade, Robyn Scott
Establish baseline capabilities and clear ground rules for AI use in diplomacy [208-212] Many AI pilots lack systematic evaluation, creating a gap that must be closed before standards can be set [241-242][243-250]
Takeaways
Key takeaways
Diplomatic negotiations are highly complex, involving many parties, massive documentation, strategic dynamics, and time pressure, creating a heavy cognitive load. AI can alleviate information overload, synthesize documents, generate strategic options, provide real‑time transcription/translation, and support execution, but must be integrated with existing analytical frameworks (game theory, decision analysis). Large language models alone are insufficient due to unverifiable fluency, opacity, and lack of accountability; a broader suite of AI methods and decades‑old techniques is required. Human authority and human‑in‑the‑loop must remain central; AI tools should be modular, transparent, and used to augment—not replace—decision‑makers. Maintaining human agency (“above the algorithm”) is essential to avoid a zero‑sum dynamic where AI dominates the process. Significant gaps exist in AI literacy, systematic pilot evaluation, and ethical governance within the public sector; training and clear evaluation frameworks are needed. Risks of cultural bias, data poisoning, prompt injection, and over‑reliance on AI outputs were highlighted; multilingual, culturally diverse training data and robust safeguards are required. The MOVE 37 project will develop tools, evaluation methodologies, stakeholder interviews, autonomous research agents, strategy sandboxes, and red‑team simulations to embed AI responsibly in diplomatic workflows. Collaboration among scholars, practitioners, and policymakers is crucial to establish a new discipline of AI‑augmented diplomacy.
Resolutions and action items
Commit to keeping human authority central and ensuring AI tools are modular and transparent (as stated by Charlie Posniak). Develop a suite of AI tools for research, analysis, strategy, and execution phases of negotiations (MOVE 37 project). Create and deploy evaluation methodologies for AI pilots in the public sector (Robyn Scott’s observation). Conduct one‑on‑one interviews with current and former diplomats to inform tool design (Slavina Ancheva). Build AI literacy programs across intelligence, State Department, and other federal agencies (Nandita Balakrishnan). Assemble a multilingual, culturally representative training dataset for negotiation‑support models (Gabriela Ramos). Establish red‑team and sandbox environments for testing AI‑augmented negotiation strategies (Charlie Posniak). Invite interested stakeholders to join the project and provide feedback (Michael’s closing invitation).
Unresolved issues
How to systematically ensure that AI models capture the full cultural and linguistic diversity of all negotiating parties. Concrete mechanisms to protect against data poisoning, prompt injection, and other adversarial attacks on negotiation‑support AI. Standardized procedures for evaluating AI pilots and translating pilot results into policy adoption. Balancing power asymmetries when some states have far greater data access and AI capabilities than others. Defining clear criteria for when AI should be used for strategic option generation versus when human judgment must dominate. Methods for transparent source attribution in AI‑generated analyses to satisfy accountability requirements.
Suggested compromises
Use AI as a decision‑support tool while retaining final authority and accountability with human negotiators. Maintain modular AI components that can be inspected and turned off, allowing negotiators to stay “above the algorithm.” Limit AI deployment to specific, well‑defined tasks (e.g., document synthesis, translation) while keeping higher‑level strategic choices human‑driven. Implement pilot projects with built‑in evaluation and human‑oversight checkpoints before wider rollout. Adopt a “battle‑mindset” toward AI: continuously question outputs and verify against independent sources rather than accepting them unquestioningly.
Thought Provoking Comments
Why can’t you just ask an LLM? … Their fluency isn’t necessarily verifiable in international politics, the opacity makes accountability hard, and relying only on chatbots would miss out on 80 years of game theory, decision analysis, and other technical developments.
Challenges the simplistic notion that large language models alone can handle diplomatic negotiations, highlighting issues of verification, transparency, and the rich existing methodological toolbox.
Shifted the discussion from enthusiasm about AI tools to a more cautious, nuanced view, prompting subsequent speakers (e.g., Robyn and Nandita) to address trust, evaluation, and the need for human oversight.
Speaker: Charlie Posniak
The heuristic of being ‘below the algorithm’ (e.g., an Uber driver dispatched by a system) versus ‘above the algorithm’ (using tools to further your goals). We must move people up above the algorithm in diplomacy.
Introduces a clear, relatable framework for thinking about human agency versus algorithmic control, emphasizing empowerment rather than replacement.
Reoriented the conversation toward preserving and enhancing human decision‑making power, leading to deeper discussion about AI literacy, pilot evaluation, and the risk of “drunk on AI” attitudes.
Speaker: Robyn Scott
When I wrote my first intelligence piece, a mentor pointed out a ten‑year‑old data point that completely negated my argument. An AI tool that could surface such hidden evidence and synthesize it would have made the analysis faster and smarter.
Provides a concrete, personal illustration of AI’s potential to overcome human cognitive limits and data‑access bottlenecks in intelligence work.
Illustrated the practical value of AI for analysts, reinforcing the earlier point about augmenting—not replacing—human judgment and prompting calls for explainability and transparency.
Speaker: Nandita Balakrishnan
We received 55,000 public comments on the UNESCO AI ethics recommendation and used AI to integrate them. I wish we had AI to map the traditional positions of countries and give me a repository of their negotiation histories.
Shows a real‑world application of AI in a massive, multilingual policy process and raises the strategic question of how AI could support position‑tracking while flagging privacy and bias concerns.
Moved the discussion from abstract theory to a tangible use‑case, sparking follow‑up questions about cultural representation, data poisoning, and the ethical limits of profiling negotiators.
Speaker: Gabriela Ramos
Culture is expressed by language. To capture philosophies like Ubuntu, we must ensure models are trained on many languages and sources, otherwise they will reflect individual‑welfare bias and miss collective worldviews.
Links linguistic diversity directly to cultural fairness in AI models, highlighting a subtle but critical source of bias in diplomatic AI tools.
Expanded the conversation to include multilingual fairness and the need for diverse training data, leading to mentions of the Swiss multilingual LLM project and reinforcing the earlier point about representation.
Speaker: Gabriela Ramos (response to audience question)
We think a lot about how AI tools could provide competitive leverage and whether they should be dispersed offensively, not just defensively, because they will reshape power structures in both cooperative and adversarial negotiations.
Frames AI deployment as a geopolitical issue, moving the dialogue from technical feasibility to strategic implications for global balance of power.
Served as a turning point that broadened the scope of the panel, prompting participants to consider equity, access, and the macro‑level consequences of AI‑augmented diplomacy.
Speaker: J. Michael McQuade
Overall Assessment

The discussion was steered by a series of pivotal remarks that moved it from a high‑level enthusiasm about AI to a layered, critical examination of its role in diplomacy. Charlie’s warning against over‑reliance on LLMs introduced the need for verification and accountability, which set the stage for Robyn’s agency heuristic and Michael’s geopolitical framing. Personal anecdotes from Gabriela and Nandita grounded the debate in real‑world practice, highlighting both the promise (handling massive comment volumes, surfacing hidden data) and the perils (privacy, cultural bias, power imbalances). Audience questions about cultural representation and data poisoning, answered by Gabriela, further deepened the conversation around fairness and security. Collectively, these comments redirected the panel toward a balanced view that AI should augment, not replace, human negotiators, that transparency and multilingual inclusivity are essential, and that the strategic distribution of AI capabilities will shape future power dynamics.

Follow-up Questions
How can we ensure that the diverse cultural inputs of the world’s most diverse countries are embedded in the data sets and models that inform negotiations?
Cultural representation is essential for fairness, legitimacy, and effectiveness of AI‑mediated diplomatic processes.
Speaker: Sam Dawes (Audience)
What safeguards are needed to address data poisoning, prompt injection, and other adversarial risks when AI is used as a neutral mediator or negotiation assistant?
Ensuring the integrity and security of AI tools is critical to maintain trust among parties and prevent manipulation of outcomes.
Speaker: Sam Dawes (Audience)
What concrete steps can practitioners like Devika Rao take to advance AI‑supported cultural‑education frameworks and foster co‑creation collaborations?
Guidance is needed for individuals working at the intersection of culture, education, and AI to translate ideas into actionable projects and partnerships.
Speaker: Devika Rao (Audience)
How will AI impact the balance of power given unequal access to data sets and AI capabilities among states, especially in multilateral negotiations?
Understanding power asymmetries is vital to design equitable AI governance mechanisms and avoid exacerbating geopolitical tensions.
Speaker: Arman (Audience)
Develop a comprehensive, searchable repository of historical country positions and negotiation precedents to support AI‑driven diplomatic tools.
Such a knowledge base would enable negotiators to quickly assess counterpart stances, reduce cognitive load, and improve strategic preparation.
Speaker: Gabriela Ramos; Slavina Ancheva
Research effective AI‑literacy and training programmes across all public‑sector agencies (beyond defence and intelligence) to enable informed AI adoption.
Broad AI competence is required for policymakers to evaluate, trust, and responsibly integrate AI into diplomatic workflows.
Speaker: Nandita Balakrishnan
Design robust evaluation frameworks for AI pilots in the public sector to measure impact, identify failures, and guide scaling decisions.
Many pilots lack systematic assessment, limiting learning and the ability to demonstrate value or address shortcomings.
Speaker: Robyn Scott
Create multilingual, culturally inclusive large language models (e.g., the Swiss initiative) tailored for diplomatic contexts.
Multilingual models ensure that non‑English perspectives are accurately captured, reducing bias and enhancing global relevance.
Speaker: Robyn Scott
Investigate methods to keep AI tools modular, transparent, and accountable, preserving human authority throughout the negotiation process.
Transparency and explainability are prerequisites for trust, legal compliance, and ethical deployment in high‑stakes diplomacy.
Speaker: Charlie Posniak
Explore AI techniques for generating strategic options, counter‑arguments, and preference mapping to aid negotiators in complex, multi‑party settings.
These capabilities can help negotiators navigate information overload, avoid groupthink, and identify creative solutions.
Speaker: Charlie Posniak; Gabriela Ramos
Study the psychological effects of over‑reliance on AI (e.g., “sleeping at the wheel,” false‑negative bias) and develop safeguards to maintain critical human oversight.
Preventing complacency ensures that AI augments rather than replaces human judgment, preserving decision quality.
Speaker: Robyn Scott
Assess the feasibility and implications of using AI to predict geopolitical events for proactive policy‑making and diplomatic strategy.
Predictive analytics could give governments early warning and strategic advantage, but requires validation and ethical oversight.
Speaker: Nandita Balakrishnan
Determine appropriate calibration levels of AI assistance at different negotiation stages (pre‑negotiation preparation vs. real‑time execution).
Different phases demand distinct types of AI support; calibrating these helps align tool output with negotiators’ needs and maintains control.
Speaker: J. Michael McQuade (prompting Gabriela)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm

Session at a glanceSummary, keypoints, and speakers overview

Summary

In his keynote, Vijay Shekhar Sharma declared that India has become the global hub for artificial intelligence talent, attributing this concentration to the country’s leadership and ecosystem [2-5]. He praised the Indian government, especially the Prime Minister, for fostering AI enthusiasm much as it did for Startup India a decade earlier [7-9]. Sharma noted that most people already interact with AI daily through personal agents or co-pilots, and that this usage can become addictive [11-15]. He illustrated widespread adoption by recounting his early QR-code initiative, where even a household helper in Aligarh could use Paytm after a simple photo, demonstrating how technology spreads to the common man [17-23]. According to Sharma, by 2025 AI was largely an individual experience, but from 2026 onward it will be embedded in businesses to solve problems previously considered unsolvable [28-31]. He argued that AI will transform financial services by improving credit assessment, thereby extending wealth creation to previously unserved populations [32-36]. Similar breakthroughs are expected in agriculture and livestock, where AI-driven solutions developed in India could address global challenges [36]. Sharma emphasized that India must build its own foundation models, not merely adopt foreign ones, and highlighted the success of his fellow entrepreneur Sarvam in this effort [36]. He described the upcoming “demographic technology dividend,” where the country’s young population will both create and consume AI engines tailored to diverse use cases [37-40]. Rather than focusing solely on generic language models, Sharma advocated developing sector-specific engines-such as for call-centers that could evolve into remote healthcare providers [42-49][58-59]. He warned that AI will not simply eliminate jobs but will generate abundance, urging stakeholders to ride the wave instead of being victimized by it [59]. Concluding, Sharma called the audience to join the AI revolution, asserting that India is now the world’s AI “center of gravity” [60-65]. He ended with a rallying cry that collective participation will reshape how India is perceived globally [63-68].


Keypoints

Major discussion points


India’s AI leadership and demographic advantage – Sharma repeatedly stresses that the world’s AI talent is concentrated in India, that the country’s “demographic dividend” will become a “demographic technology dividend,” and that India is the “center of gravity of AI.” [2-7][60-64]


AI as a catalyst for financial inclusion and sectoral transformation – He explains how AI can deepen credit access, make financial services reach every corner, and extend to agriculture, livestock and other industries, turning local solutions into global ones. [32-36]


Building indigenous foundation models and specialized AI agents – The speaker argues that India must develop its own foundation models, LLMs and “engines” (agents) rather than only using foreign models, positioning these as the basis for sector-specific AI applications. [42-49]


AI will create abundance, not just job loss, and a call to join the AI revolution – Sharma contends that AI will enable new services (e.g., call-center-to-health-care) and generate “AI-led abundance.” He urges the nation to “ride the wave” and collectively “join the revolution.” [59][61-68]


Overall purpose / goal


The discussion is a rallying speech aimed at inspiring Indian entrepreneurs, policymakers, and the broader public to embrace AI, invest in home-grown models and sector-specific solutions, and position India as the world’s premier AI hub that can solve both domestic and global challenges.


Overall tone


The tone is consistently enthusiastic and patriotic, moving from celebratory praise of India’s current AI stature, to an optimistic vision of transformative impact across industries, to a persuasive call-to-action urging collective participation. While the speech briefly acknowledges potential concerns about job displacement, it quickly reframes them as opportunities for abundance, maintaining an upbeat, rally-like momentum throughout.


Speakers

Vijay Shekhar Sharma


– Role/Title:


– Area of Expertise:


Speaker 1


– Role/Title: Event host / moderator (inferred) [S3][S5]


– Area of Expertise:


Additional speakers:


– None


Full session reportComprehensive analysis and detailed insights

Vijay Shekhar Sharma opened his keynote by echoing the emcee’s welcome and declaring India the global hub of artificial-intelligence talent, praising the host nation’s leadership and the Prime Minister’s vision for AI as a new “Startup India” drive that is reshaping the country’s future [1-5][60-64].


He then noted that most people already interact with AI through personal agents or co-pilots, and that this everyday convenience can become “addictive” as users grow accustomed to the technology [11-15].


Illustrating rapid mass adoption, Sharma recalled pitching QR-code payments to a skeptical government official during the demonetisation era; he later showed that even his house-help in Aligarh could complete a Paytm transaction simply by photographing a QR code, proving that the common man could grasp the system and that the service now reaches every corner of the country [17-23].


Turning to the future, he marked 2026 as a turning point when AI will move from an individual, experiential tool to a core capability embedded in businesses, emphasizing that AI’s utility will extend beyond chat or photo editing to power entire industries [26-28][28-31].


In the financial-services sector, Sharma argued that AI can handle the “corner cases” that traditional credit-risk models miss, enabling more inclusive lending and turning the smartphone-enabled financial system into a truly inclusive one [32-35].


He extended this vision to agriculture and livestock, citing a recent discussion between Nandan Sir and the Prime Minister about AI-driven cattle-health monitoring and suggesting that similar AI applications could address crops, farm machinery and broader agrarian challenges [36-38].


Central to realising these ambitions, he likened foundation models to engines and sector-specific AI applications to vehicles, arguing that India must build its own “engines” (large-language models) and then deploy them across finance, agriculture, healthcare, etc., rather than merely importing foreign models [42-49].


Linking this technical agenda to the country’s youthful population, Sharma coined the term “demographic-technology dividend,” asserting that India’s demographic advantage will both generate and consume AI engines, turning the traditional demographic dividend into a catalyst for rapid AI diffusion and economic growth [37-40].


Among sector-specific use cases, he described how call-centres could evolve into remote healthcare providers: AI agents would manage enquiries, track health metrics and enable human oversight, creating AI-led abundance rather than merely displacing jobs [58-59].


Reflecting on his own entrepreneurial journey, Sharma compared the current AI disruption to the 2010 shift from feature-phone value-added services to smartphones, warning that firms that fail to adapt risk obsolescence while those that embrace AI can extend services far beyond current expectations [59-60].


He invoked the wisdom of the Gita, noting that “change is the only constant” and asserting that India will not only embrace AI change but also lead it [55-57].


Concluding, Sharma reiterated that India is now the “centre of gravity of AI,” urging government, startups, academia and citizens to join the AI revolution; collective participation will cement India’s leadership and reshape global perceptions of the nation, ending with a rallying chant that “we are here” and a thank-you to the audience [60-68].


Session transcriptComplete transcript of the session
Speaker 1

Ladies and gentlemen, please welcome Mr. Vijay Shekhar Sharma.

Vijay Shekhar Sharma

Wow. First of all, I do believe that everybody who is an Indian must be very proud that all the AI people in the world are in one city and one country. For that, we need to clap for this event’s host. And I think this is the power of India, my friend. I don’t have to say this. Everybody who is somebody in AI is right now in this country. Our Prime Minister has been able to bring the excitement of AI. Just like 10 years back, he was able to do it for Startup India. So from the Startup India to the AI India, once again for our Honorable Prime Minister this time, guys. I don’t have to tell you how the powerful capability of AI all of us have experienced.

Many of you must be using a personal agent in every other day. And if not agent, you must be using a co -pilot. You must be asking questions to him. And the beauty is that… But the more you use it, the more it becomes addictive. It is where the technology is. When we launched the QR code, I still remember, I went to the government and I had a discussion with them that this is a matter of demonetization, that this can be paid in this way. So the person with whom I was talking, he asked me, do you think the common man will understand what to do? So I said, sir, I went to Aligarh and my house help said, brother, we also do Paytm.

So I asked, how do you do it? He said, you have to take a photo of it from Paytm. And when I told him, I said, sir, when a common man understood how to do Paytm, then this publicity has now become confirmed in the world. And now today, in every nook and corner of the country, we can see the payments reaching and completing itself. And now this takes us to the next milestone, where every one of us who uses it. Every smartphone can now use power of AI. Now, I don’t have to tell this once again. The capabilities that we will harness over the period will not be just limited by the. chat or let’s say the photo you are making or editing something or picking up a message from WhatsApp, it will go towards the industry.

So till 2025, AI was more of an individual experiential play, if you will. You know, you were trying to find out use case and the problem answers that you fundamentally believe that it will be. But 2026 begins with a commitment and confidence that AI will bring the capability in the business and the work and the problem that we typically would not have assumed that would be solved. And let me say this. Typically, I come from financial service industry and I fundamentally believe access to credit creates the wealth. But access to credit requires a lot of insights and abilities to confirm whether this money will come back or not. Many rules and regulations are allowing us to expand the reach of credit.

But by the capability of AI, we will be able to take care of corner cases where it should not go or it should go. So people will become more financially inclusive than ever before. as you are knowing the smartphone gave access to the financial system to the every nook and corner of the country now this time financial institutions will serve those customers so from access to the rich ability of financial system will reach financial systems bring wealth to the country bring access bringing access to the credit to the last person brings wealth to the person there and that is what i believe ai will be able to do let’s say in financial system you could talk about agriculture you could talk about husbandry i remember the conversation between nandan sir and prime minister sir yesterday was happening about let’s say how could you use the power of ability of cattle to use in ai and then a mull case was talked about now imagine the same thing could be done even for machines even for plants even for agriculture the capability of ai that we want to use will make it possible for us to build it in india for the problem and solution that we build for india and this time we while we are solving the problems here we will not solve a local problem we will solve a global problem because the capability of indians have been proven that we can make world -class technology the technology that falls at an order of magnitude scale and abilities that are globally renowned and capable once again i’m going to say that that this is not about foundation model only important a foundation model is a horizontal capable model i don’t mind saying that we must must and for sure have a foundation model in india all because we have a capability and resources to do it and i’m very proud that my fellow entrepreneur sarvam has done the job and i do believe that is an acknowledgement that we can build it it’s not something rocket science it’s not something that we cannot build it but the point is not about just building a foundation model point is about building the models that solve for us solve for global south solve for global problems and those models and the requirement of those models to bring in everyday life can only happen in a country where the demographic difference between the two countries is very important and i think that is the key to evidence belongs to us young people, if I tell them to use it, with whom you will be able to do it, your capability will increase, they will experience it.

So the first time our demographic dividend will also become the demographic technology dividend, if you will. The capability of our young, capability and ability and intent of our young will aid to the propagation of AI unlike ever before. It is not about just using, let’s say, a messaging platform or a payment platform. It is about adding the capability in your everyday life. And that is rare and possible only in this country. Again, there is a question of, for me, that will you build LLM models or will you build agents on top of it? I’m sure all of us have understood that models are the foundation and the, let’s say, on the top of it an agent.

It’s like asking, will you build vehicles or will you build engines? It’s not like when Daimler Benz made an engine, India didn’t make it and no other country made it. We will also make our engines. Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, different, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will different, what will be the use?

Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will be small, big, different, what will be the use? Our engines will be small, big, different, different, what will be the use? Our engines will be and many more fold than ever before imagine so right now what has happened in the world is that someone has made an engine which is called ICE and you are saying that can you make an engine, yes we can make an engine because we know the nuances of it but what is more important than that is the use case of that engine using it to make a passenger vehicle, using it to make a bus using it to make a truck, using it to make a trailer that is the use case the world wants to see, India not only will be the use case capital of the world but India will also be the capital of number of LLMs that India will build India will build more number of LLMs for the section of usage and ability of usage than ever before the fight is not about just the foundation model, fight is about AI that works for a sector, works for a segment and solves the problem of an agent for example like call center, call center, call center is a talked about thing that we will let’s say what will happen to the jobs of call centers I don’t mind saying that call center as a literally job may or may not be challenged yes but the capability is immense if we can solve the call of someone else’s country why can’t we solve the healthcare problem of someone else’s country if imagine a European there is there is a old age in Europe and you need to solve for their health care tracking and conditioning and requirements so a call center can evolve to become a healthcare provider because they can track the local knowledge of that country in the newest of that country and remotely somebody can humanly look at it and confirm yes you should take that action and that capability can only happen in a country that is embracing the change and embracing the technology it is not a question of whether there will be AI led job reduction it is rather a question of there will be AI led abundance and are you on the riding the wave or are you getting victimized on the wave I remember 2010 when this country had feature force I remember the business model that I used to run was feature phone led value added services business ringtone ringback tone many of you might have been the customer and you remember that and I want to tell you one thing I was going for IPO in 2010 and the challenge was that what will happen to the future forward because the smartphone I had seen in US and I was uncomfortable that we should do an IPO at that point of time because I was like the business model is going to change and the power of capability of smartphone was not about that they will be PCOS CDA and that is the power of AI that you should look at it and that is the power of AI that you should look at it and that is the power of AI that you should look at it and that is the power of AI that you should look at it and that is the capability that Indians will look at it some of us will embrace it as a ability and capability that we can extend and deliver even further set of services capabilities that are not yet seen and reached within ourselves and some of us will feel that we are victim of the capability this machine gives and that is the change my friend always continuous in the world and I think the India and the land of Gita, which has told that change is the only constant in the world, will not only embrace it and lead it, but it will lead it from the front and show the world the ability and capabilities of AI that will show up.

So ladies and gentlemen, I’m very proud to be in the country where we today are talking and the center of universe of AI gravity is. And from here onwards, we will, instead of looking at AI as a challenger to any problem that we see or any opportunity that we today yield, but to a larger opportunity and larger capability that India will make and all Indian will make India proud. So with this, I again and again say the ability of India can only be underestimated when we all together join our hands and join in the revolution. So I would say this once again, join the revolution and change the way India is perceived in the world.

And today, our Honorable Prime Minister has shown that the center of gravity of AI is India. We are here. We are here. We are here. We are here. We are here. We are here. Thank you so much, guys.

Related ResourcesKnowledge base sources related to the discussion topics (17)
Factual NotesClaims verified against the Diplo knowledge base (6)
Confirmedhigh

“Most people already interact with AI through personal agents or co‑pilots, and the more they use it, the more it becomes addictive.”

The speaker’s statement matches the wording in the knowledge base, which notes that users employ personal agents or co-pilots and that increased use can become addictive [S8].

Additional Contextmedium

“Sharma launched QR‑code payments during the demonetisation era and demonstrated that even his house‑help in Aligarh could complete a Paytm transaction by photographing a QR code, showing mass‑market adoption.”

The knowledge base confirms that Sharma discussed the launch of QR-code payments, but it does not mention the house-help anecdote; the QR-code launch detail provides supporting context [S8].

Confirmedhigh

“2026 will be a turning point when AI shifts from an individual, experiential tool to a core capability embedded in businesses and entire industries.”

A source explicitly frames 2026 as a year when the conversation around AI moves from surprise to stewardship and broader industry impact, confirming the claim [S59].

Additional Contextmedium

“AI can handle the “corner cases” that traditional credit‑risk models miss, enabling more inclusive lending and turning the smartphone‑enabled financial system into a truly inclusive one.”

While the knowledge base does not use the phrase “corner cases,” it discusses AI as a game-changer in financial services and notes banks will continue core functions while transforming delivery, providing contextual support for the inclusive-lending narrative [S2] and [S63].

Confirmedhigh

“AI‑driven cattle‑health monitoring and broader AI applications for crops, farm machinery, and agrarian challenges are being discussed at the highest level.”

The knowledge base cites an AI-powered robot (SwagBot) for cattle farming and references AI initiatives in agriculture, confirming the existence of AI projects in livestock and crop sectors [S64] and [S11].

Confirmedhigh

“India’s youthful population creates a “demographic‑technology dividend,” turning the demographic advantage into a catalyst for rapid AI diffusion and economic growth.”

Multiple sources highlight India’s young, digitally native population as a driver of both creation and consumption of AI technologies, aligning with the speaker’s coined term [S39] and [S66].

External Sources (68)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S2
From Innovation to Impact_ Bringing AI to the Public — – Vijay Shekhar Sharma- Audience – Vijay Shekhar Sharma- Harinder Takhar
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Welcome Address — Artificial intelligence
S7
Seismic Shift — Startup activity in India rose to prominence with the Modi government’s 2016 launch of Startup India, an initiative desi…
S8
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-vijay-shekar-sharma-paytm — Many of you must be using a personal agent in every other day. And if not agent, you must be using a co -pilot. You must…
S9
AI driving transformation in financial services — At YourStory’s Tech Leaders’ Conclave, Ankur Pal, Chief Data Scientist at Aplazo,discussedhow AI is transforming the fin…
S10
https://dig.watch/event/india-ai-impact-summit-2026/from-innovation-to-impact_-bringing-ai-to-the-public — And will I get a loan or not? The question and answer we don’t get from anyone, and they try to do it on the phone call,…
S11
AI for agriculture Scaling Intelegence for food and climate resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S12
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S13
Shaping the Future AI Strategies for Jobs and Economic Development — So we have been able in recent times to use our resources to establish telemedicine in particular areas. We have now ove…
S14
Agentic AI drives structural change in customer care — Customer careis undergoing structural changeas agentic AI moves from experimental pilots to large-scale deployment. Adva…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And I want this. The most important thing that I want people to understand is… just because, and I think that the, you…
S16
https://dig.watch/event/india-ai-impact-summit-2026/keynote-vinod-khosla — But that’s sort of my hope. So the AI talks directly to patients. It diagnoses, prescribes tests, prescriptions. We are …
S17
Building the Workforce_ AI for Viksit Bharat 2047 — From the community health worker delivering nutrition to an expecting mother to the balancing worker strategizing access…
S18
DRAFT AUGUST, 2024 — Nigeria shall also emphasise technical skills and talent to drive AI adoption and initiatives, as well as change managem…
S19
AI Innovation in India — This comment energized the discussion by providing a grand vision that contextualized all the individual innovations wit…
S20
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S21
IndoGerman AI Collaboration Driving Economic Development and Soc — India’s demographic dividend, combined with rapid innovation capabilities, complements Germany’s engineering precision a…
S22
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And we are one of the cheapest data connectivity package in the world. So we are. The largest user of YouTube in the wor…
S23
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — The financial inclusion sector is transforming in 2025, moving beyond mere access to financial services to focus onfinan…
S24
Secure Finance Risk-Based AI Policy for the Banking Sector — “Yet, inclusion cannot be assumed”[73]. “If harnessed responsibly, AI can convert this expanding digital footprint into …
S25
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Additionally, generative AI can democratise financial services by allowing all participants to easily access the service…
S26
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Participant: See, when you look at AI or when you look at digital public infrastructure solutions, one thing that one sh…
S27
World Economic Forum Open Forum: Visions for 2050 – Discussion Report — Arjun envisions a 2050 where AI creates abundance in healthcare, human services, and education systems that are easy to …
S28
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S29
How to make AI governance fit for purpose? — Economic and Social Impact Economic | Development The Trump administration believes AI will bring countless revolution…
S30
AI Meets Agriculture Building Food Security and Climate Resilien — Dr. Soumya Swaminathan This comment introduces crucial nuance to AI adoption by highlighting the socioeconomic implicat…
S31
Lightning Talk #209 Safeguarding Diverse Independent NeWS Media in Policy — ## Background and Research Context None identified beyond those in the speakers names list.
S32
Laying the foundations for AI governance — Lan Xue: Okay. I think my job is easier. I can say I agree with all of them. So I think that’s probably the easiest way….
S33
morning session — In addition to the discussions surrounding confidence-building measures and the BWC, this expanded summary also emphasiz…
S34
Table of contents — + Even though Estonia is esteemed as a digital country in the world, our attention and resources are largely directed to…
S35
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S36
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S37
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S38
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S39
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — India’s advantages in this transformation include demographic energy, linguistic complexity, cultural depth spanning tho…
S40
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S41
AI Collaboration Across Borders_ India–Israel Innovation Roundtable — And I think that India is the spiritual capital of the world. You have thousands of years in exploring the human spirit….
S42
AI Innovation in India — This comment energized the discussion by providing a grand vision that contextualized all the individual innovations wit…
S43
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S44
Technology Rewiring Global Finance: A Panel Discussion Summary — Economic | Infrastructure Hu argues that financial services, being data-rich, are ripe for AI transformation. He emphas…
S45
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — The financial inclusion sector is transforming in 2025, moving beyond mere access to financial services to focus onfinan…
S46
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Technology will drive financial inclusion by making services accessible through natural language interactions in local l…
S47
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — In conclusion, generative AI has immense potential to revolutionize various sectors and bring about significant benefits…
S48
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Building indigenous foundation models and sector‑specific LLMs Sharma stresses that India must create its own foundatio…
S49
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S50
From Innovation to Impact_ Bringing AI to the Public — The conversation highlights India’s advantageous position as a $2.5-3.5 trillion economy with potential to add another $…
S51
The Intelligent Coworker: AI’s Evolution in the Workplace — -Workforce Impact and Career Evolution- Discussion of how AI will reshape job structures, eliminate traditional entry-le…
S52
High Level Session 3: AI & the Future of Work — Junha Li: Thank you. Good morning. Good to see you again in this plenary hall. Before I’ll distinguish the panel, starti…
S53
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S54
Shaping the Future AI Strategies for Jobs and Economic Development — Some of them are very close to the coastal area so that it can also accommodate cable landing stations for us, so that t…
S55
How to make AI governance fit for purpose? — Economic and Social Impact Economic | Development The Trump administration believes AI will bring countless revolution…
S56
Ghibli trend as proof of global dependence on AI: A phenomenon that overloaded social networks and systems — It is rare to find a person in this world (with internet access) who has not, at least once, consulted AI about some dil…
S57
People are forming emotional bonds with AI chatbots — AI is reshaping how peopleconnectemotionally, with millions turning to chatbots for companionship, guidance, and intimac…
S58
Are AI companions reshaping human intimacy or eroding real connections? — AI companion apps, such as Replika, Character.AI, and others, redefinehow people experience emotional connection. These …
S59
AI in 2026: Learning to live with powerful systems — As we look ahead to 2026, things start to change. In this sense, 2026 may come to represent a period of recalibration. …
S60
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Excellency, thank you very much first and foremost I would like to thank India for hosting this excellent event Malaysia…
S61
AI and Digital Developments Forecast for 2026 — Data governance | Privacy and data protection | Capacity development Kurbalija applies the Pareto principle to AI inves…
S62
Most transformative decade begins as Kurzweil’s AI vision unfolds — AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translati…
S63
The rise of AI in financial services: balancing opportunities and challenges — According to industry executives, AIis increasingly seenas a game-changer in the financial services sector, offering sig…
S64
SwagBot aims to prevent soil degradation — A revolutionary AI-powered robot named SwagBotis changing the face of cattle farming. Developed by researchers at the Un…
S65
Not Losing Sight of Soft Power — Prime Minister’s Background and Vision
S67
(Interactive Dialogue 2) Summit of the Future – General Assembly, 79th session — Civil Society 2: Mr. Chairman, distinguished colleagues. Today we face a world filled with growing geopolitical tensio…
S68
https://dig.watch/event/india-ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — That’s very diplomatically put. Sindhu, I come to you. You have just inaugurated a huge campus and your ambitions of tak…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument119 words per minute9 words4 seconds
Argument 1
Opening welcome establishes focus on India’s AI prominence
EXPLANATION
The host introduces Vijay Shekhar Sharma, signalling the importance of the event and setting the stage for a discussion centered on India’s role in AI. This brief welcome frames the subsequent narrative about India’s AI leadership.
EVIDENCE
The moderator says, “Ladies and gentlemen, please welcome Mr. Vijay Shekhar Sharma,” thereby formally opening the session and drawing attention to the speaker and the topic of AI in India [1].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The welcome address for the AI session explicitly frames the event around AI in India, matching the opening welcome described [S6].
MAJOR DISCUSSION POINT
Opening welcome
AGREED WITH
Vijay Shekhar Sharma
V
Vijay Shekhar Sharma
13 arguments207 words per minute2135 words616 seconds
Argument 1
National AI pride – India hosts the world’s AI talent and events
EXPLANATION
Sharma asserts that all leading AI researchers and practitioners are now concentrated in India, creating a source of national pride. He emphasizes that this clustering underscores India’s emerging status as a global AI hub.
EVIDENCE
He states, “everybody who is an Indian must be very proud that all the AI people in the world are in one city and one country,” and adds that the audience should applaud the event’s host for making this possible [2-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s claim that India is the AI “gravity centre” is echoed in the keynote where India is described as the focal point of global AI innovation [S1].
MAJOR DISCUSSION POINT
AI talent concentration
AGREED WITH
Speaker 1
Argument 2
Government leadership – Prime Minister’s push mirrors Startup India success
EXPLANATION
Sharma credits the Prime Minister for driving AI enthusiasm, drawing a parallel with the earlier Startup India initiative that spurred entrepreneurship. He suggests that government backing is crucial for replicating past successes in the AI domain.
EVIDENCE
He notes, “Our Prime Minister has been able to bring the excitement of AI,” and compares it to the “Startup India” effort from ten years ago, saying the current AI push is a continuation of that leadership [7-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Startup India initiative’s impact on the ecosystem is documented, and Sharma’s reference to the Prime Minister’s AI drive aligns with this government-led momentum [S7]; the keynote also highlights the PM’s role in AI promotion [S1].
MAJOR DISCUSSION POINT
Government‑driven AI momentum
Argument 3
Financial inclusion – AI will improve credit assessment, reaching the unserved
EXPLANATION
Drawing on his experience in financial services, Sharma argues that AI can analyze vast data to assess credit risk more accurately, enabling loans for people who were previously excluded. This, he says, will broaden wealth creation across the country.
EVIDENCE
He explains that “access to credit creates wealth” but requires “insights and abilities to confirm whether this money will come back,” and that AI will handle “corner cases” to make credit more inclusive [32-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s remarks on AI-driven credit assessment are supported by the keynote’s discussion of expanding credit reach through AI and by a separate transcript on AI-enabled loan decisions [S1], [S10].
MAJOR DISCUSSION POINT
AI‑driven credit expansion
Argument 4
Agriculture & livestock – AI can solve local problems and scale to global solutions
EXPLANATION
Sharma envisions AI applications in farming, animal husbandry, and plant science, starting with Indian challenges and then extending to worldwide issues. He claims that solving local problems with AI will demonstrate India’s capacity to address global needs.
EVIDENCE
He references a recent discussion with the Prime Minister about using AI for cattle, and expands the idea to “machines, plants, agriculture,” asserting that Indian AI solutions will become globally relevant [36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of AI models for plant nutrient detection, livestock health, and broader agricultural use cases are provided in the “Innovation to Impact” session and in a dedicated agriculture AI briefing [S2], [S11].
MAJOR DISCUSSION POINT
AI for agri‑livestock
Argument 5
Call‑center evolution – AI agents can turn call centers into remote healthcare providers
EXPLANATION
Sharma suggests that AI‑powered agents can transform traditional call‑center jobs into services like remote health monitoring, leveraging localized knowledge to serve other countries. This illustrates a broader shift from simple automation to sector‑specific AI services.
EVIDENCE
He describes how a call centre could “evolve to become a healthcare provider” by tracking local health needs and enabling remote human verification, highlighting the potential for cross-border service delivery [59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformation of customer-care operations through agentic AI and the deployment of tele-medicine services illustrate the call-center to health-service shift Sharma describes [S14], [S13].
MAJOR DISCUSSION POINT
AI‑enabled service transformation
Argument 6
Need for home‑grown foundation models – India must create its own LLMs
EXPLANATION
Sharma argues that India should develop its own large language models rather than rely on foreign foundations, positioning this as a strategic necessity for technological sovereignty. He likens model creation to building engines for vehicles.
EVIDENCE
He asks, “Will you build LLM models or will you build agents on top of it?” and compares it to “building vehicles or building engines,” concluding that “We will also make our engines” [42-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote stresses the strategic necessity for indigenous foundation models and parallels with engine development, directly supporting Sharma’s argument [S1].
MAJOR DISCUSSION POINT
Indigenous LLM development
Argument 7
Proof of capability – Indian entrepreneur Sarvam’s model demonstrates feasibility
EXPLANATION
Sharma cites the work of fellow entrepreneur Sarvam as concrete evidence that India possesses the talent and resources to build advanced AI models. This example serves to validate the claim that home‑grown foundation models are achievable.
EVIDENCE
He expresses pride that “my fellow entrepreneur Sarvam has done the job” and treats it as “an acknowledgement that we can build it” [36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sarvam’s successful development of a world-class model in India is highlighted in a dedicated interview and a follow-up discussion on adapting the model for Indian languages [S15], [S16].
MAJOR DISCUSSION POINT
Local success story
Argument 8
Shift to sector‑specific agents – Focus on models that solve concrete industry problems
EXPLANATION
Sharma emphasizes moving beyond generic foundation models toward specialized AI agents tailored for particular sectors, such as call‑centers, healthcare, or logistics. This shift is presented as essential for delivering real economic value.
EVIDENCE
He notes the question of “will you build LLM models or will you build agents on top of it?” and later illustrates sector-specific use cases like call-center-to-healthcare transformation [42-45][59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward industry-specific AI agents is discussed in the agentic AI structural change report and reiterated in the keynote’s sector-focused model strategy [S14], [S1].
MAJOR DISCUSSION POINT
Industry‑focused AI agents
Argument 9
Youth‑driven AI adoption – Young population will turn demographic dividend into a tech dividend
EXPLANATION
Sharma claims that India’s large youth cohort will accelerate AI uptake, converting the traditional demographic dividend into a “demographic technology dividend.” He suggests that young people’s enthusiasm and capability will drive AI propagation.
EVIDENCE
He states, “our demographic dividend will also become the demographic technology dividend,” and highlights the “capability of our young… will aid to the propagation of AI” [37-38].
MAJOR DISCUSSION POINT
Tech‑focused demographic dividend
Argument 10
AI‑led abundance vs. job loss – Embrace AI to create new opportunities rather than be victimized
EXPLANATION
Sharma argues that AI will generate abundance and new kinds of work, countering fears of job displacement. He frames the choice as riding the AI wave versus being victimized by it.
EVIDENCE
He says, “it is not a question of whether there will be AI led job reduction it is rather a question of there will be AI led abundance and are you on the riding the wave or are you getting victimized on the wave” [59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote frames AI as a driver of job evolution rather than displacement and notes structural changes in the workforce due to agentic AI [S1], [S14].
MAJOR DISCUSSION POINT
AI and future of work
Argument 11
Historical parallel – Smartphone transition shows the need to adapt to disruptive tech
EXPLANATION
Sharma draws parallels between past technology shifts—such as QR‑code/Paytm adoption and the move from feature phones to smartphones—and the current AI wave, warning that failure to adapt will leave businesses behind.
EVIDENCE
He recounts his QR-code experience with the government and Paytm adoption as an illustration of rapid tech uptake [17-23], and later reflects on his 2010 feature-phone business model that was disrupted by smartphones [59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s analogy to QR-code/Paytm adoption and the shift from feature phones to smartphones is documented in the speaker’s recount of rapid tech uptake [S8].
MAJOR DISCUSSION POINT
Learning from past disruptions
Argument 12
Collective participation – All Indians must unite to drive AI forward and reshape global perception
EXPLANATION
Sharma issues a rallying call for nationwide involvement in the AI revolution, asserting that collective effort will change how India is viewed internationally. He repeats the invitation to “join the revolution.”
EVIDENCE
He urges, “join the revolution and change the way India is perceived in the world,” and reinforces the message with repeated chants of “We are here” [62-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote repeatedly calls listeners to “join the revolution” and emphasizes nationwide mobilisation for AI leadership [S1].
MAJOR DISCUSSION POINT
National AI mobilisation
Argument 13
India as AI gravity centre – The country is now the focal point of AI innovation; engagement is essential
EXPLANATION
Sharma declares India the “center of gravity” for AI, suggesting that the nation now leads global AI activity. He links this status to the Prime Minister’s leadership and calls for continued engagement.
EVIDENCE
He proclaims, “the center of universe of AI gravity is,” and later notes that “our Honorable Prime Minister has shown that the center of gravity of AI is India” [60][64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The statement that India is the “center of gravity” for AI is directly quoted in the keynote, confirming the claim [S1].
MAJOR DISCUSSION POINT
India as AI hub
Agreements
Agreement Points
India is positioned as the global AI hub and a source of national pride
Speakers: Speaker 1, Vijay Shekhar Sharma
Opening welcome establishes focus on India’s AI prominence National AI pride – India hosts the world’s AI talent and events
Speaker 1 opens the session by welcoming the speaker and framing the event around AI in India [1]; Sharma immediately reinforces this framing by stating that all AI people in the world are now in one Indian city and country and later declares India the centre of gravity of AI [2-3][60][64].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple speakers emphasized India’s unique standing in the global AI ecosystem, citing its large technical talent pool, multilingual datasets, vibrant startup scene, and supportive policy environment, which together frame the country as a leading AI hub and a point of national pride [S35][S36][S37][S38].
Similar Viewpoints
Sharma repeatedly emphasizes that India has become the world’s AI hub, first by highlighting the concentration of AI talent and later by proclaiming India the centre of gravity of AI [2-3][60,64].
Speakers: Vijay Shekhar Sharma
National AI pride – India hosts the world’s AI talent and events India as AI gravity centre – The country is now the focal point of AI innovation; engagement is essential
Unexpected Consensus
None identified
Speakers:
The transcript contains only two speakers, and the only overlap is the expected alignment on India’s AI prominence; no surprising areas of agreement emerge.
Overall Assessment

The discussion shows a clear, though limited, consensus that India is emerging as the global centre for AI, framed by the opening welcome and reinforced throughout Sharma’s remarks. Apart from this shared narrative, there is little substantive overlap on other themes such as financial inclusion, agriculture, or AI‑led abundance, which remain the domain of Sharma’s individual arguments.

Low to moderate – agreement is confined to the overarching claim of India’s AI leadership, implying a unified national narrative but limited convergence on policy‑specific or sector‑specific issues.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only two speakers: Speaker 1, who delivers a brief welcome ([1]), and Vijay Shekhar Sharma, who presents a continuous monologue covering many themes ([2-68]). No opposing statements or contrasting viewpoints are expressed between the speakers; Sharma does not directly address or contradict the welcome, and Speaker 1 does not raise any substantive claim. Consequently, there are no identifiable disagreement points, no partial agreements with differing approaches, and no unexpected areas of conflict.

Minimal – the interaction is essentially a one‑sided presentation, so disagreement does not influence the discussion of any of the listed topics.

Takeaways
Key takeaways
India is positioned as the global hub for AI talent and events, driven by strong government leadership and a legacy of initiatives like Startup India. AI is expected to transform key sectors: financial inclusion through better credit assessment, agriculture and livestock management, and the evolution of call centers into remote healthcare providers. Building indigenous foundation models (LLMs) is essential; Indian entrepreneurs have already demonstrated feasibility, and the focus should shift toward sector‑specific AI agents that solve concrete problems. India’s large youth population can convert the demographic dividend into a ‘technology dividend’ by rapidly adopting and creating AI solutions, leading to AI‑led abundance rather than job loss. A collective call to action urges all stakeholders—government, industry, and citizens—to unite and drive the AI revolution, positioning India as the world’s AI gravity centre.
Resolutions and action items
Encourage the development of home‑grown foundation models and sector‑specific AI agents within India. Leverage the youth demographic to accelerate AI adoption and skill development. Promote AI‑driven financial inclusion initiatives that expand credit access to underserved populations. Explore AI applications in agriculture, livestock, and remote healthcare, using Indian‑built models to address both local and global challenges. Mobilize a nationwide AI coalition—government, startups, academia, and citizens—to collaborate on building and deploying AI solutions.
Unresolved issues
Specific funding mechanisms and investment strategies for creating large‑scale Indian foundation models. Regulatory frameworks and data governance policies needed to safely deploy AI in credit assessment and financial services. Detailed implementation road‑maps for AI integration in agriculture, livestock, and healthcare sectors. Quantitative assessment of AI’s impact on employment and strategies to mitigate potential job displacement. Clear timelines and measurable milestones for achieving India’s goal of becoming the AI gravity centre.
Suggested compromises
None identified
Thought Provoking Comments
Till 2025, AI was more of an individual experiential play… 2026 begins with a commitment and confidence that AI will bring capability to business and solve problems we typically would not have assumed could be solved.
Marks a clear temporal shift, moving the conversation from AI as a personal gadget to a strategic, enterprise‑level catalyst, highlighting a future‑oriented vision.
This comment pivots the discussion from anecdotal uses to a macro‑level forecast, setting up subsequent points about industry transformation, financial inclusion, and sector‑specific AI solutions.
Speaker: Vijay Shekhar Sharma
Access to credit creates wealth, but it requires deep insights to assess repayment risk. AI will enable us to handle corner cases, making credit more financially inclusive than ever before.
Links AI directly to a critical socioeconomic challenge—financial inclusion—by proposing a concrete application (risk assessment) that could reach the ‘last person.’
Introduces a tangible use‑case that expands the conversation from generic AI hype to specific societal impact, leading to later references about agriculture, livestock, and broader inclusion.
Speaker: Vijay Shekhar Sharma
It’s not just about having a foundation model; we must build models that solve problems for the Global South and for global challenges, leveraging India’s demographic advantage.
Challenges the notion that merely adopting existing models suffices, advocating for home‑grown, context‑specific AI that serves both local and global needs.
Shifts the narrative toward indigenous innovation, prompting the later engine/vehicle analogy and reinforcing the call for India to develop its own AI ‘engines.’
Speaker: Vijay Shekhar Sharma
Building LLMs is like building engines; the real value is in the vehicles (applications) we create—call centers, healthcare, transportation—using those engines.
Uses a vivid analogy to differentiate between foundational technology and its practical deployments, emphasizing the importance of application over mere model creation.
Clarifies the strategic focus for listeners, steering the discussion toward sector‑specific AI deployments and reinforcing the earlier point about building models for real‑world problems.
Speaker: Vijay Shekhar Sharma
It is not a question of AI‑led job reduction; it is a question of AI‑led abundance. Are you riding the wave or being victimized by it?
Reframes a common fear about automation into an opportunity narrative, prompting a mindset shift from threat to potential prosperity.
Introduces a hopeful tone that underpins the later call to action, encouraging participants to view AI as a catalyst for new economic growth rather than a job killer.
Speaker: Vijay Shekhar Sharma
My experience in 2010 with feature‑phone value‑added services taught me that business models can become obsolete; the smartphone—and now AI—represents a similar disruptive shift that we must anticipate.
Draws on personal history to illustrate the pattern of technological disruption, providing credibility and a concrete lesson for the audience.
Serves as a reflective turning point, linking past disruption to present AI trends and reinforcing the urgency of adaptation discussed throughout the talk.
Speaker: Vijay Shekhar Sharma
Join the revolution and change the way India is perceived in the world; the centre of gravity of AI is India.
A rallying call that synthesizes earlier arguments into a unifying, motivational message, aiming to mobilize collective effort.
Concludes the monologue by consolidating all prior points into a decisive call to action, leaving the audience with a clear directive and a sense of national pride.
Speaker: Vijay Shekhar Sharma
Overall Assessment

The discussion, though delivered as a single‑speaker monologue, is structured around several pivotal insights that progressively broadened its scope—from personal AI experiences to national strategy, socioeconomic impact, indigenous model development, and a reframing of automation fears. Each key comment acted as a turning point, redirecting focus to new themes and deepening the narrative, ultimately culminating in a unifying call to action that sought to galvanize India’s AI community and position the country as a global AI leader.

Follow-up Questions
Will you build LLM models or will you build agents on top of it?
Determines strategic focus between creating foundational large language models versus building application‑level AI agents, impacting investment and talent allocation.
Speaker: Vijay Shekhar Sharma
Will you build vehicles or will you build engines?
Metaphorical query emphasizing the need to develop core AI capabilities (engines) before proliferating diverse applications (vehicles).
Speaker: Vijay Shekhar Sharma
Develop a homegrown foundation language model (LLM) in India
Essential for self‑reliance, tailoring AI to Indian languages and contexts, and establishing India as a global AI model creator.
Speaker: Vijay Shekhar Sharma
Create sector‑specific AI models for finance, agriculture, livestock, healthcare, and call centers
Targeted models can solve industry‑unique problems, turning local innovations into globally competitive solutions.
Speaker: Vijay Shekhar Sharma
Use AI to enhance financial inclusion and credit risk assessment
AI can handle complex corner cases, expanding credit access to underserved populations and driving economic growth.
Speaker: Vijay Shekhar Sharma
Apply AI to agriculture and animal husbandry (e.g., cattle health monitoring)
Improves productivity, sustainability, and food security, leveraging AI for critical primary sector challenges.
Speaker: Vijay Shekhar Sharma
Leverage India’s demographic dividend as a ‘demographic technology dividend’ for AI adoption
Young, tech‑savvy population can accelerate AI diffusion, innovation, and economic benefits across the country.
Speaker: Vijay Shekhar Sharma
Investigate AI’s impact on employment, focusing on AI‑led abundance versus job reduction
Understanding socioeconomic effects is vital for policy making, workforce reskilling, and ensuring inclusive growth.
Speaker: Vijay Shekhar Sharma
Develop AI solutions that address Global South challenges, not just local Indian problems
Positions India as a leader in creating AI that solves worldwide issues, enhancing global relevance and market opportunities.
Speaker: Vijay Shekhar Sharma

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel at the AI Impact Summit 2026 examined how rapid AI advances are reshaping India’s IT services, SaaS models, and broader economic productivity [6-11]. Arundhati Bhattacharya cautioned that market headlines about a 40 % drop in Salesforce’s valuation and a “AI-agent-replacing” SaaS model are overstated, emphasizing that successful SaaS requires workflow understanding, governance, observability and adoption, not just low-code development [12-23]. She added that the current models of working will evolve and firms must stay agile to add value, noting that many players will emerge and the ultimate test is whether AI improves living standards [30-38].


K. Krithivasan explained that AI will shift many engineers from writing code to orchestrating AI systems, but system integrators will remain essential for testing, validation, requirements and cybersecurity, especially as cloud adoption is still only 30-40 % and enterprises must rationalize data estates and train multiple models [46-70]. Salil Parekh highlighted a $300 billion AI services opportunity for Indian firms, citing AI engineering, legacy-modernisation and the company’s Topaz Fabric IP layer that enables clients to work across foundation models and custom agents [79-102]. C. Vijayakumar described HCL Tech’s unique position with a software product line, custom silicon and high revenue per employee, and said the company will focus on building solutions that bridge the gap between foundation models and enterprise needs rather than becoming a hyperscaler [108-127].


On the talent side, Krithivasan reported a recent workshop where 1,500 schoolchildren built 1,500 apps in three hours, illustrating AI’s potential to upskill non-technical youth and the industry’s collaboration with the Ministry of IT on curricula [135-147]. Arundhati expanded this view, arguing that democratizing AI for MSMEs and blue-collar workers requires addressing skilling, access to jobs, timely payments and marketplace platforms that certify and match workers, thereby raising overall quality of life [157-177]. Salil noted that India is already leveraging its digital public infrastructure to roll out AI pilots in agriculture, health and education, with support from chip, data-center and architectural layers to make AI services widely affordable [183-192].


Vijayakumar warned that capturing the projected $350-400 billion AI services market will demand substantially higher R&D spending, citing a trillion-dollar “physical AI” opportunity and the need to build solution labs ahead of demand [200-215]. Both Krithivasan and Vijayakumar agreed that AI is likely to create more jobs than it destroys, though the new roles will emphasize programming fundamentals, critical thinking and the ability to orchestrate multiple AI agents [299-304][283-296]. Salil also stressed the importance of responsible-AI frameworks to ensure ethical model training and deployment [306-311]. Concluding, Amitabh Kant summarized the panel’s optimism that AI will drive a “Vixit Bharat” by 2047, generating diverse employment and helping India reach a $30 trillion economy [312-317].


Keypoints


Major discussion points


AI’s impact on the traditional SaaS model and Indian enterprises – Arundhati Bhattacharya cautioned that market hype (e.g., Salesforce’s 40 % valuation drop) does not automatically invalidate the SaaS model; she emphasized that SaaS success still depends on workflow understanding, governance, observability and delivering concrete customer value, and that the “jury is still out” on whether AI agents will replace it outright [12-23][30-38].


The evolving nature of IT services work – K. Krithivasan argued that AI will not eliminate system-integrator roles but will shift emphasis toward requirements-engineering, context-engineering, validation, security and cloud-adoption; the volume of work will grow rather than shrink, creating “more interesting work” [46-53][58-70].


Infosys’s AI services opportunity and IP strategy – Salil Parekh described a $300 billion AI-services opportunity across six focus areas (e.g., AI engineering, legacy modernization) and highlighted Infosys’s proprietary “Topaz Fabric” IP layer that abstracts foundation models and agents, signalling a move from pure “builder-for-hire” to owning AI stack IP [79-88][98-102].


HCL Tech’s positioning in the AI stack – C. Vijayakumar explained that HCL leverages its product business and custom silicon capabilities to build enterprise-grade solutions that bridge the gap between foundation models and practical use cases, while deliberately avoiding a hyperscaler role and focusing on solution-centric IP [108-118][124-127].


Skilling, democratization of AI and national digital public infrastructure – The panel stressed that AI must be made accessible to blue-collar workers and MSMEs, requiring new curricula, community-level training, and a DPI-style AI infrastructure (agriculture, health, education) to uplift productivity across the country [135-147][157-176][183-192].


Overall purpose / goal of the discussion


The session was convened as the closing panel of the AI Impact Summit 2026 to assess how generative AI will reshape India’s massive IT services ecosystem, to debate the sustainability of existing business models (SaaS, services), to outline strategic responses (new service lines, IP creation, partnerships), and to chart a national skilling and infrastructure roadmap that ensures AI-driven productivity and job creation for both white- and blue-collar segments.


Overall tone and its evolution


– The conversation began with a formal, probing tone, as the moderator posed a challenging market-valuation question to Arundhati [11-13].


– It then shifted to a balanced, analytical tone, with panelists dissecting technical and workforce implications (Krithivasan’s and Salil’s detailed explanations) [46-70][79-88].


– When discussing corporate strategy (Infosys, HCL) the tone became pragmatic and forward-looking, highlighting concrete IP initiatives and partnership models [98-102][124-127].


– The later segment on skilling and public AI infrastructure adopted an optimistic, inclusive tone, emphasizing democratization and national-scale impact [135-147][157-176][183-192].


– The moderator closed with a hopeful, rallying tone, projecting AI as a catalyst for massive job creation and a “Vixit Bharat” future [312-317].


Overall, the discussion moved from cautious skepticism about market hype to confident optimism about India’s capacity to harness AI through strategic innovation, skill development, and public-sector support.


Speakers

Moderator – Session moderator for the AI Impact Summit 2026. Role: Moderator of the panel discussion. [S13]


Amitabh Kant – Host and moderator of the panel. Role: Moderator (referred to as “Mr. Amitabh Kant”). Expertise: Indian IT industry, AI policy. [S6]


Arundhati Bhattacharya – Former SBI CEO and current technology leader. Role: Former Chairman & MD of State Bank of India; now a tech leader focusing on SaaS and AI. Expertise: Banking, technology, SaaS, AI. [S16]


Salil Parekh – CEO of Infosys. Role: Chief Executive Officer, Infosys Ltd. Expertise: IT services, AI services, digital transformation. [S9]


K. Krithivasan – CEO of Tata Consultancy Services (TCS). Role: Chief Executive Officer, TCS. Expertise: IT services, AI-driven workforce transformation. [S11]


C. Vijayakumar – Senior executive of HCLTech. Role: Senior leader (often referred as “C. Vijayakumar”) at HCL Technologies. Expertise: IT services, hardware/AI chips, enterprise AI solutions. [S18]


Navneet Kaul – Audience member who asked a question about AI-created jobs. Role: Audience participant. [S5]


Audience – Various members of the live audience who asked questions (e.g., Mania Sharma, Devika Rao, Kishla, Harswar, etc.). Role: Audience participants. [S1]


Additional speakers:


Christy Varshan – Referred to as “CEO of TCS” early in the discussion (likely a mis-naming of the TCS CEO).


Christy Wilson – Mentioned as “the biggest employer in India,” presumably a senior executive of a large Indian IT firm.


Mania Sharma – CEO of Mono AI, a young entrepreneur seeking mentorship.


Devika Rao – Representative from the University of Leeds, interested in AI-creative education collaborations. [S5]


Kishla – Audience member asking about skill development for current IT employees.


Harswar – Audience member concerned about AI misuse.


Mamanama Venkatana Rasimahati – Software architect and founder of “Startup Sanatana,” advocating culturally-aligned AI.


Rupa Arvindakshan – Leader of Salesforce’s startup community, mentioned as a point of contact for startups.


Full session reportComprehensive analysis and detailed insights

The AI Impact Summit 2026 closed with a panel of senior leaders from India’s IT services sector, moderated by Amitabh Kant. Kant introduced the four panelists – Salil Parekh (CEO, Infosys), K. Krithivasan (CEO, TCS), C. Vijayakumar (CEO, HCL Tech) and Arundhati Bhattacharya (Senior VP, Salesforce) – and set the tone by noting that the industry was “at a point of disruption” [1-11]. He also highlighted that the Indian IT services industry “represents over 300 billion USD in market value and employs millions of professionals” [1-3].


Arundhati Bhattacharya responded to a market-driven narrative that AI agents could wipe out the traditional SaaS model. Amitabh Kant’s opening question referenced the recent ≈ 40 % fall in Salesforce’s market value over the past 12 months [11-13]. Bhattacharya warned that such headlines often over-state the impact because SaaS success depends on more than low-code generation; it requires deep workflow understanding, governance, observability, auditability and genuine customer-value delivery [14-23]. She noted that some of the capital flowing into AI-driven SaaS is “circular money” and that investors must read the fine print [24-28]. While acknowledging that current working models will evolve, she argued that the “jury is still out and may remain so for some time” on whether AI will fundamentally overturn SaaS, and that the ultimate test will be whether AI improves living standards [30-38]. When asked about startup support, she directed interested founders to contact Rupa Arvindakshan, whose details are publicly listed on Salesforce’s website [280-282].


K. Krithivasan, CEO of TCS, shifted the focus to the future of IT-services work. He argued that AI will not eliminate system-integrator roles; instead, engineers will move from writing code to orchestrating AI systems, emphasizing requirements-engineering, context-engineering, validation, cybersecurity and testing of AI-generated outputs [46-53]. He highlighted that cloud adoption in Indian enterprises remains only 30-40 % after a decade, meaning a long-tail of migration, data-estate rationalisation and model training will generate a larger volume of more interesting work rather than a headcount shrinkage [58-70].


Salil Parekh, CEO of Infosys, outlined the company’s view of the AI-services opportunity. He cited a $300 billion market across six focus areas – AI engineering, legacy modernisation, AI factories, AI agents, physical AI and AI-driven analytics – and presented internal data showing aggressive hiring: 20 000 graduates recruited this year [91] and 13 000 added in the first three quarters [92]. Parekh also described Infosys’s proprietary “Topaz Fabric” IP layer, which abstracts foundation models and custom agents, allowing clients to work with any model while retaining Infosys-built capabilities [98-102]. This signals a strategic move from pure “builder-for-hire” to owning a reusable AI stack.


C. Vijayakumar, CEO of HCL Tech, explained his firm’s positioning. He noted that HCL’s software-product business contributes about 10 % of revenue and that the company has built custom silicon – a two-nanometre chip – for a major technology client, giving it a high revenue-per-employee metric [108-113]. HCL’s AI strategy is to bridge the gap between foundation models and enterprise use-cases, developing IP that makes large models scalable for businesses rather than attempting to become a hyperscaler or to build its own foundation models [118-127]. This pragmatic focus aligns with HCL’s decision to stay “solution-centric” while partnering with major solution providers [124-125].


Krithivasan then turned to talent development, describing a workshop in the NCR where 1 500 schoolchildren, most with no technical background, built 1 500 apps in three hours using AI-assisted native-language coding [130-138]. He framed skilling as a “major national challenge” and said the workshop was part of a broader collaboration with the Ministry of IT to design AI curricula for universities [130-138].


Bhattacharya expanded the democratisation theme, arguing that AI must be made accessible to blue-collar workers and MSMEs. She listed the challenges faced by carpenters, plumbers, hospitality staff and Anganwadi workers – skill gaps, lack of job visibility, payment delays and weak community safety nets [158-166]. She suggested AI-driven platforms could certify skills, match workers to opportunities and improve quality of life for both workers and their customers [170-176].


Parekh linked corporate strategy to national policy by describing an emerging digital public infrastructure for AI that mirrors India’s earlier DPI achievements (Aadhaar, UPI). He cited pilot projects in agriculture, health and education that are already being rolled out with support from chip, data-centre and architectural layers, and noted ongoing work with ministries to make AI services affordable and widely available [188-192].


Vijayakumar warned that capturing the projected $350-$400 billion AI-services market will require a substantial increase in R&D intensity. He pointed to a trillion-dollar “physical AI” opportunity, estimating at least $200 billion of services revenue, and argued that Indian firms must invest now in labs, proofs-of-concept and solution-building before demand materialises [200-209][210-215]. He added that outcome-based contracts will eventually fund higher R&D spend, but a proactive investment is needed to stay ahead of the curve [211-215].


All panelists agreed that AI will be a net job creator. Krithivasan asserted that AI will generate many new jobs in India, albeit in different occupational categories [299-304]; Vijayakumar reinforced that while programming fundamentals remain essential, the critical future skill will be orchestrating AI agents, i.e., managing multiple AI agents, applying critical thinking and delivering outcomes at several-fold the speed of manual coding [283-297]; and the moderator closed with optimism that AI will help India become a “Vixit Bharat” by 2047, driving a $30 trillion economy and massive employment growth [312-317].


During the audience segment, Kant repeatedly asked participants to keep questions brief, limit themselves to one per person, and be direct, emphasizing gender-balanced participation [250-260]. Questions from young entrepreneurs Mania Sharma and Devika Rao prompted Bhattacharya to direct them to the Salesforce startup community contact (Rupa Arvindakshan) [274-280]. Queries about future job types and required skills were answered by Krithivasan and Vijayakumar, who both stressed the rise of orchestration, analytical thinking and AI-tool proficiency [283-297][299-304]. Concerns about AI misuse, such as disinformation, led Salil Parekh to reiterate the need for responsible-AI frameworks, governance, cultural alignment and ethical model training [306-311][262-267].


Session transcriptComplete transcript of the session
Moderator

With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arundhati Bhattacharya. With the moderator, Mr. Amitabh Kant. A big round of applause, ladies and gentlemen, to welcome them all to the stage. Well, it’s over to you, Mr. Amitabh Kant.

Amitabh Kant

So let me welcome these very distinguished leaders of the Indian IT. tech service industry and we have with us Arundhati Bhattacharya who is both a banker and a great tech leader now. The three of them amongst them they are leaders of an industry that represents over 300 billion in market value over 25 lakh crore and they employ millions of Indians. We are actually meeting at a point of disruption. I’m not going to take much time in introducing the panel or I’m not going to take much time in giving my own introduction. I will straight away move to asking them questions and then open it up to all of you so that you can ask the questions. I’ll try and start with the lady in the panel and I’ll also try and end with the lady in the panel Arundhati Bhattacharya was probably the most distinguished of us all so let me, Arundhati, let me try and be as direct as possible so Salesforce has lost roughly about 40 % of its market value in just 12 months a single AI product launch wiped almost 285 billion of SaaS stocks in a day the market is saying that AI agents will replace per seat software subsystems is the market wrong or is the traditional SaaS model genuinely under threat and what does that mean for thousands of Indian enterprises that have built their operations on Salesforce platform?

Arundhati Bhattacharya

First and foremost, thank you very much for asking that question. I’ve been answering that question so many times in the last few days that it’s almost like rehearsed, you know, as to how I should go about it. But having said that, you know, markets will say a lot of things. Not all of it comes true. And when you talk about the SaaS model, it’s not only about vibe coding. It’s not only about creating an application. It’s also about, you know, understanding what the workflows are like. It’s about realizing what the customer’s pain points are and ensuring that you are addressing those particular pain points. It’s about observability about what your agents are doing. It’s about governance.

It’s about auditability. It’s about adoption. There are so many pieces to making something really work in an organization that to just say that because I can vibe code, that means, you know, everything else goes out of the window. I think that’s being a little too, you know, little too hasty about totally, you know, rejecting a way of doing business. Also, I must say that, you know, which I’m not very true is correct, but people have to sometimes pump up values given the kind of money that is going in over there. And by the way, some of that money is circular money that’s going in over there. So I’m not too sure that the market is actually giving the right message that it should.

And like for everything that the market gives people, investors especially are requested to read the fine print. And obviously, you know, exercise their discretion in the matter. Having said that, is it true that the models that we have today will remain exactly the same? They won’t. I’m very clear about one thing, which is that all of the models that we have today, and I mean not the LLM models, but I mean the models of working, the ways of working, whether it be in respect of the SaaS companies or it be in respect of any of the other companies, even intra -companies or any of the other companies, things will change. And we have to be very agile about the way we look at these things and realize where are the changes going to come and how can we ourselves change in tune so as to remain relevant, so as to be able to actually add value to your customer.

End of the day, people who add value are the ones who are going to stay. who are going to survive, who are going to be sustainable. And therefore, adding value is what we need to do. And for adding value, whatever it takes for us to do, we need to do those things. So I think, you know, the jury is out and will remain out for a while because the race is on and we don’t know who’s going to win the race. But this much I do know that, you know, it will not be one individual unit or one individual kind of unit. There will be very many players in this whole thing. But at the end of it all, as long as it improves our standards of living, as long as it gives us results and answers which we never had before, it is for the good of humanity.

And I hope that will definitely happen.

Amitabh Kant

Thank you. Thank you for that very detailed answer to that question. I now turn to Mr. Christy Varshan, the CEO of, TCS. Yes, Mr. Krishnivasan, the industry consensus is that AI will shift work from writing code to orchestrating AI systems. What does TCS look like to you in 2030? What will be the headcount and what will be the revenue per employee? And how is TCS communicating that transition to its workforce and to the country?

K. Krithivasan

Thank you and good afternoon to everyone. See, this is the topic that everyone has been discussing in the last few days and few weeks. And like Arun has explained, the market also has been contemplating. But there will be a few things that will change. Many things may not change or the other way. Like if you look at the role of what most of us do as a system integrator, the role of system integrators come into play because there are complex, complex systems and many of them have a lot of legacy. you it’s not that one day you can have a llm understand everything and auto generate code and the software engineers will go away but not to say like arunthi said there will be more and more productivity that will be brought in and at the same time you need system integrators who can test validate verify what is being generated and so that’s one part of it the second is as you also look at today the role will shift towards more and more requirements engineering context engineering how do you know whether you are building the right system how do you validate a system is doing the right thing does it have cyber security does it do some harm all those things are to be validated you like you may not know all the roles that you will have five years down the line but we don’t envisage a situation where there will be a significant shrinkage of hope Now, the other areas that we have somehow not looked at when we get excited about generating code is for many of the, for instance, cloud came into play about maybe 10 years ago.

But if you ask most organizations, they will tell you that 30 -40 % cloud has been adopted. There is so much to be done. So, this is going to be a long tail. And even within that, you would see many organizations, they have to prepare for deploying or adopting. It’s not a trivial job. They have to get their data estate right. They have to get their applications rationalized, modernized. So, there is a certain amount of work to be done. They need to train their models like Arunthati was saying or somebody was mentioning. You will have some large models, many small models in every enterprise. They have to be trained. And the last part of it, which you are not…

Again, looking at is what is it new that you can do with all these LLMs? There will be many interesting things. Somebody has to build. Somebody has to think through that. So, if you look at another 30, I don’t envisage a time that there is a significant shrinkage of hope. but there will be more volume of work that will be produced. More volume of work. More volume of work that will be produced and more interesting work that will be done.

Amitabh Kant

Thank you. Thank you. So that’s an interesting perspective. I turn to Salil, who is the CEO of Infosys. Salil, one of the very provocative statements made by one of the Bay Area leaders and investor actually was that, which attracted a lot of news coverage, was that services model is dead within five years. Your chairman, Nandan Nilikani, just I was reading, in fact, I went through his interaction, and he said that this is not an opportunity gap, but this is an execution gap and that the real money isn’t clear. He’s cleaning up trillions of dollars of legacy tech debt. who is right? Is Nandan right or is it this Bay Area leader? Who’s right according to you?

Salil Parekh

That’s an easy answer Nandan’s of course right. There’s no question. So simply because he’s your chairman because I think Nandan I’m sure everyone has a view. Nandan is a visionary who has a view on this business for years. I think the way we see it is and we shared this a few days ago there are several areas of opportunity that come from AI services and there are six that we have highlighted recently just a few days ago and those in aggregate we have shared some data are about 300 billion dollars of opportunity over the next several years and then we’ve gone into a little bit of of the detail in each. I’ll give just a couple of examples.

There’s one which is like AI engineering, which is the building of agents, orchestrating, integrating some of the points what Kriti was mentioning. There’s another which you alluded to how Nandan has said it of legacy modernization, which is basically saying there were some things which were 15, 20 years old with large companies. How can we bring it to the more current? And there because of AI agents, the cost is lower, the time is less. And so there’s an easier economic rationale for companies to do it. So as we put all these together, we see these what we call AI services, which will give us the growth. And what I think the point you made, what Nandan said, if we can pivot our company to serve these big six areas for our clients, that execution path.

then the opportunity is good. And again, some data on that, like this year, which will end in March, we have recruited 20 ,000 college graduates. Next year, we are on track and we’ve announced we are recruiting 20 ,000 college graduates. This year, our headcount has increased in the first three quarters by 13 ,000 people. And my sense is that will continue. So what it’s opening up really is new set of opportunities. And there is some productivity benefit that comes with if like specifically in Infosys, but I’m sure in general, if we execute and serve our clients, there will be more opportunity.

Amitabh Kant

So tell me, do you at any stage aspire to own intellectual property in the AI stack or will you remain a builder for hire?

Salil Parekh

So then the approach, I’ll speak a little bit for Infosys, our approach is, we have a lot of opportunities. We have tremendous IP. So like in AI, we built this IP layer called Topaz Fabric. which has the ability for clients to work with any of the foundation models, plus the agents that we have built, that Infosys has built, plus any third party agents. So that’s the layer that we are, let’s say, pretty good at and that we will build and continue to build the IP on. That’s the approach on the IP that we’ve taken.

Amitabh Kant

OK. CVK, let me turn to you because HCL Tech has a software product business. It has a design custom AI chips in Bangalore and you operate across the full stack, actually. Is HCL Tech positioning as an AI builder at any stage? And if so, how far up the stack are you willing to go into models and to compute, into infrastructure? Infrastructure that at any stage will compete with the hyperscalers rather than just partner with them.

C. Vijayakumar

Thank you and good evening everyone. HCL Tech, as you kind of gave some pointers, we are uniquely placed because, first of all, we have a software product business which delivers 10 % of our revenue. And we also have a very deep engineering heritage where we service top 50 of the 100 R &D spenders doing a lot of work, including some cutting edge work. Like we have built a two nanometer custom silicon for one of the technology companies. So we have these unique capabilities. And this also reflected in our high. Highest revenue per employee amongst the IT services companies. So with this backdrop, our AI strategy is one of them is. heavily indexed on building because of course, our core services, we will continue to modernize and evolve our services to be relevant for the future.

And even if it means it takes away some revenue streams, we are proactively doing it. But I think the biggest focus is there are these large language models and the foundational models. They cannot be applied most efficiently for enterprise use cases. There is still a gap between what a foundation model can deliver and what is the ultimate efficiency and innovation that’s possible. So we are really trying to bridge the gap and building IPs that will bridge the gap which helps enterprises to scale AI adoption. And we’re also focused on a lot of specialized services like even Salil mentioned, like physical AI, AI factory. agentic AI. All of these new solutions we are very focused on.

And of course, the partnership ecosystem becomes extremely critical. So we are partnering with almost all the large solution providers. I don’t think we are building anything to become a hyperscaler. I think we missed the bus many, many years ago. And we’re not building models, but we are building solutions which will make the models much more scalable and applicable within enterprises.

Amitabh Kant

Thanks. Thanks. Thanks for that. I just wanted to turn to Mr. Christy Wilson because he’s the biggest employer in India. He employs over 600 ,000 engineers. You know, India produces millions of engineering graduates a year and many of them are trained for exactly the kind of work AI is now going to automate in a very big way. So what should be a skilling strategy of this country? I mean, how do we do reskilling and skilling at an individual level or should we do it at a national country level? I mean, what is the view of India’s leading CEO in terms of skilling and reskilling?

K. Krithivasan

This is, to my mind, a major national challenge. It’s a challenge and an opportunity. Because, in fact, three days ago, here we ran a workshop with about 1 ,500 kids from all the schools in the NCR region. Most of them are, in fact, all of them have non -technical background and many of them could not even speak or not fluent in English. And we taught them how to use their native language, how to do coding. And, in fact, within a span of about three hours, almost 1 ,500. Apps were built. So that’s the power of AI. You can… You can worry about how AI is going to take away the jobs at the entry level. I think AI also enables all these people to develop and imagine new areas where software can make the lives of people better.

And it creates more and more opportunity. You can be afraid and not do anything. I think we should be forward leaning and train as many people as possible. In fact, all three of us are working with the Ministry of IT in creating the curriculum for the students coming up in all these universities.

Amitabh Kant

Wonderful. Wonderful. Thanks. Thanks for that very positive and constructive perspective. Arundhati, just let me turn to you again and ask you if AI is to drive India’s productivity. It cannot remain just a Fortune 500 story. How do we make it? AI tools accessible. to millions of MSME? How do we raise productivity? How do we build for the market? Even if unit economics looks slightly different from enterprise contracts, how do we scale it up in a big way for MSME to make the difference to Indian economy?

Arundhati Bhattacharya

So thank you for that question, because I personally believe that unless we can democratize any technology, it doesn’t really serve the purpose of that country or for that people. And AI is something that is not meant for the white -collar worker alone. In fact, it’s one of the things that can actually empower the blue -collar, just as you were talking, Mr. Kritivasan, it can actually empower the blue -collar workers. Now, if you look at the blue -collar workers, in fact, in NEETI, we did one report taking into account the blue -collar workers like carpenters, plumbers, hospitality workers, Anganwadi workers. So a lot of these, you know, personal… personas we had taken. And what we realized is that they have multiple challenges.

One challenge, of course, is the way that they have been skilled. So they have a skilling challenge. But more than skilling challenge, they have an access challenge in the sense that they may be very well skilled, but they don’t know that the job exists within the village or in the next village. So they have an access challenge. The second thing that they have is the challenge of ensuring that they are getting paid on time. Even that is not available. So they have several challenges of this nature, that of skilling, that of access, that of payments, that of ensuring that they are parts of communities which actually enable them to be supported during times of distress or during times of need.

Many of these are actually challenges that can be solved if we can use AI in the proper way, in a proper marketplace to get the right kind of opportunities to them, the right kind of certifications to them, the right kind of assessment of their skills to them. If all of this can be done, actually speaking, we will be doing the country an enormous favor. And you will find that the quality of life of not only these people, but the quality of life that, you know, that they serve, of the people they serve, people, you know, who are actually taking their services, even that is going to become much, much better. So I think, you know, AI is not something that is meant only for the white collar workers or for people in tier one, tier two cities.

It’s meant for the SMEs. It’s meant for the MSMEs. It is what is going to empower them to get into a league which they were not able to access earlier. It is also meant for the blue collar workers because it can empower all of them.

Amitabh Kant

Salil, we’ve had a India has done something unique in digital public infrastructure It’s been transformational in terms of identity payment, credit You know, the Bank of International Settlement said that India achieved in seven years what it would have taken 50 years to achieve. How do we create a DPI, a digital public infrastructure for artificial intelligence? How do we take computing power to the common citizen? How do we scale? How do we make a difference using the power of AI?

Salil Parekh

Absolutely. I think there’s already work going on. Specifically, there are three big areas where there’s thinking going on. There’s actual projects on the ground in agriculture, in healthcare, in making sure that everything that is being done for education is helping within the country for the citizens. Also in that area, There are examples where we have shown some of it. And now the way that, as you mentioned, the India stack or the digital public infrastructure was created, where essentially it was fully available without exorbitant costs or any cost, is what the approach is being driven today. There are various components of the architecture which are being discussed. And working closely with the ministry, with the government, those will be rolled out.

Today, of course, you have seen that there is tremendous support at the chip layer, at the data center layer, at the infra layer. And now there will be more at the architecture, how it can be distributed. And at least these three big areas, agriculture, education, healthcare, are being looked at today.

Amitabh Kant

Thanks. Thanks. CVK. One last question before I open it up to the floor. You know, the hyperscalers, Microsoft, Google, Amazon, they’re spending close to about $600 billion this year alone on AI infrastructure. Spending almost about close to 50 to 55 % on CapEx. And if AI services opportunity is really in the range of about $350 to $400 billion, can Indian IT companies, can all of you together, capture it without adequate R &D? Does it not require greater level of R &D intensity? And what will it take all of you to put in more resources into R &D for the future?

C. Vijayakumar

Yes. First of all, this big CapEx. Spend also triggers a lot of services spent, like building all these data centers, AI factories. The entire IT infrastructure landscape in the world would get refreshed over the next five to eight years. That itself is a huge services opportunity. Then there is this physical AI. It’s a completely new spend. Today, there is very, very little physical AI deployed in the world. It’s believed that one of the studies by Zeno says it’s a trillion -dollar opportunity. That would mean at least $200 billion of services opportunity. I think we are looking at some very big services opportunities. Even to really encash on these big services opportunities, the companies like us will need to invest in building solutions because it’s not straightforward services.

You need to build solutions, which will mean we will have to put in more money into R &D. Right? Right. Building solutions. building labs and kind of POCs, a lot of pre -work needs to be done. And also there is a lot of opportunities to create solutions which will really make the foundational models much more scalable for enterprise. So I personally believe we should increase the R &D spend. And I do think the industry model will support us because as more and more AI is infused, more outcome -based contracts would come, which also helps us to deliver a higher profitability, which means we can very comfortably invest more in R &D. But the timing of that, we might need to invest a little ahead of the curve before the real benefits come.

Amitabh Kant

Okay, wonderful. Wonderful to hear this perspective. I’m going to open it up to this house. We’ll ask about, I’ll open it up for five questions. Please. Name yourself. just be very direct and blunt. Please don’t ask long winding questions. To the point, very matter of fact all young people, any ladies here will be given preference. The lady there. Yeah, the lady here.

Audience

Hello everyone. Thank you so much for giving me opportunity. No introduction, just name and shoe. Mania Sharma, CEO of Mono AI. Saving my two years to reach out to you from here till here. First question, how can I come and meet you as a young entrepreneur, 27 years with no network? My first question. Saving my lot of marketing money. Second question is, as a young Indian, 27 year old coming from a small town but being in Bangalore for seven years, what is your view or idea that we can be work with you guys and your support and mentorship so that young India can go somewhere else and we can also show the Silicon Valley that 25 year, 27 year standing here in a suit talking to you can do something.

Amitabh Kant

Okay. You know, I’ve allowed two questions because she’s a lady but no two questions. I will ask the five questions and then I’ll open it up for responses. Second, there’s a lady there. Go ahead.

Audience

My name is Devika Rao. I’m from UK University of Leeds. Trying to do this stage for AI creative education and art health and well -being concept and how the case study can be presented in six month time and I would like to co -create and collaborate with you.

Amitabh Kant

Okay. Anyone at the back? Yeah, go ahead. No, no, please there. Yeah, yeah, the blue shirt. Get up and ask.

Audience

My name is Navneet Kaul. and I have a three -part question.

Amitabh Kant

No, just ask one question. Don’t ask three in one. No, no, don’t ask three in one. Ask one question.

Audience

One question.

Amitabh Kant

Yeah.

Navneet Kaul

How will AI create jobs? What kind of jobs and what kind of skills do we need?

Amitabh Kant

You’re asking the question which I’ve already asked.

Audience

I want the panelists to answer very specifically and directly.

Amitabh Kant

All right. Anyone at the back, that side? Yeah, that gentleman there.

Audience

Hello. Kishla here. And my question is, as any employee who is currently in the IT sector, what skills he should plan to develop in the next five years to boost his employability?

Amitabh Kant

Okay. No, no, not front row. Back. I want to go right at the back. Anybody right at the back? Some back venture. huh who’s who’s the backest pitch yeah that gentleman in the black suit yeah go ahead ask yeah yeah shoot yeah please sir

Audience

sir my name is harswar than my question is what can be done to stop misuse of ai for example some people use grok you to unrest people

Amitabh Kant

okay so one last question to that yeah shoot in one line Just shoot.

Audience

Yeah, namaste. Mamanama Venkatana Rasimahati. Intentionally, I’m speaking in Sanskrit. One line. Yeah, I will come, sir. I’m a software architect. I’m a founder of a startup called Startup Sanatana. Though we are talking about so much of AI and AI is the limelight, unless and otherwise we built on AI on our culture, rich culture, tradition, heritage kind of thing, probably it will be a different kind of thing.

Amitabh Kant

Thank you. Thank you. Okay. So we got the question. We got the question. Now I’ll open it up. I’ll starting from we start with Arundhati’s response and then move on. You can you can respond to one of the questions and then move on.

Arundhati Bhattacharya

In respect of the question that this lady. gave and also others who are startups in this particular organization. Salesforce has a very vibrant startup community. The lady who leads it for us, her name is Rupa Arvindakshan. Her coordinates are available on our website. Please get in touch with her. She will get you in touch with our entire community. The community is supported to develop on Salesforce and also, you know, to ensure that you can take your products to market. So all of that is done. So please, you are more than welcome to communicate with us and get whatever support we can definitely give you.

C. Vijayakumar

Maybe I’ll take up the question on what skills are needed in the future. I think there is a big misconception. That software, coding, programming skills are not going to be relevant. I think the fundamental conceptual skills. in software development, programming is very, very essential if you really want to build a long -term career in the software industry. It could be in services or in product companies or AI. All of that requires sound programming skills. The second aspect is, I think, just the critical thinking, analytical skills. How do you really orchestrate work? A lot of standard work that many of you do or initially the younger engineers do, some of that or a lot of it can be done with the AI tools that are there today.

But how can you now think of yourself as an orchestrator and deliver maybe four or five times the output that you would deliver without these tools, right? I think just orchestrating the work with multiple coding agents, I think that’s really a skill. Or while you may take three, four years to manage a small team, but on day one, you have an opportunity to manage several agents to deliver an outcome which is 5x of what you would normally do. That’s just an example. And that’s how you need to think of every role. You can amplify the value that you can create using AI skills.

Amitabh Kant

Mr. Kristinasan.

K. Krithivasan

That was his question once again about will AI take away jobs or will AI? My view is it has been discussed for the last few days. Eventually, AI will create more jobs than destroy jobs. And you would find, but it may not all of them need not be programming jobs. That would be jobs of different categories, different classifications. But on the whole, we find it’s going to create more jobs and create more employment.

Amitabh Kant

Sal.

Salil Parekh

Thanks. I think there was a question really focused on can we build AI with views of our culture or views? Of responsibility. And that’s absolutely essential. we’ve also put together and I know many others have a framework on responsible AI and there were a couple of questions which sort of were on that line. It’s absolutely critical so in fact the way the agents are built, the way the foundation models data, the way they learn all of that needs this approach of responsible AI and that’s the approach that we’ve recommended. Many others are working on it and as an overall industry we should focus on that and that will give us as good an outcome as we can get and even that after that outcome we will have to refine and modify.

Responsible AI is critical in that. Thank you.

Amitabh Kant

So ladies and gentlemen we’ve heard the captains of the industry we are in the midst of disruption as I said but these leaders bring great optimism, they bring hope and according to them actually The wave of AI will end up creating many more jobs for India, but there’ll be jobs of a different kind. And we need to skill ourselves for the new emerging jobs of tomorrow. But with these leaders, I’m absolutely confident that India will ride this wave to greater progress and prosperity as it becomes a Vixit Bharat by 2047. It’ll create many, many more jobs. And these leaders will drive India to a 30 plus trillion dollar economy with jobs in the coming years. Thank you very much, ladies.

Related ResourcesKnowledge base sources related to the discussion topics (39)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“The AI Impact Summit 2026 closed with a panel of senior leaders from India’s IT services sector, moderated by Amitabh Kant, who introduced Salil Parekh (CEO, Infosys), K. Krithivasan (CEO, TCS), C. Vijayakumar (CEO, HCL Tech) and Arundhati Bhattacharya (Senior VP, Salesforce).”

The panel composition and moderator are confirmed by the transcript of the closing panel where Amitabh Kant introduced Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arundhati Bhattacharya as the four panelists [S4].

Additional Contextmedium

“Arundhati Bhattacharya warned that SaaS success depends on more than low‑code generation – it requires auditability, adoption, deep workflow understanding, governance and genuine customer‑value delivery.”

The knowledge base includes remarks emphasizing auditability, adoption and the many pieces needed for an AI solution to work in an organization, echoing Bhattacharya’s warning and adding nuance about practical utility [S22] and about sustainable value creation depending on user adoption [S19].

!
Correctionhigh

“Amitabh Kant’s opening question referenced a ≈ 40 % fall in Salesforce’s market value over the past 12 months.”

Salesforce’s recent market performance described in the knowledge base shows the company’s shares soaring to a record high of $368.7, indicating a strong rise rather than a 40 % decline, contradicting the claim of a steep fall [S110].

External Sources (112)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — <strong>Moderator:</strong> With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — My name is Devika Rao. I’m from UK University of Leeds. Trying to do this stage for AI creative education and art health…
S6
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — <strong>Moderator:</strong> With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — 1327 words | 131 words per minute | Duration: 607 secondss All right. Anyone at the back, that side? Yeah, that gentlem…
S8
Seismic Shift — 1. International Monetary Fund, ‘India’s Economy to Rebound as Pandemic Prompts Reforms’, November 11, 2021, https://www…
S10
Infosys CEO settles insider trading charges — According toIndia’s markets regulator, Infosys CEO Salil Parekhhas settledcharges related to insufficient internal contr…
S11
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — -K. Krithivasan: CEO of TCS (Tata Consultancy Services), leads company with over 600,000 engineers
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Speakers:Arundhati Bhattacharya, K. Krithivasan Speakers:Arundhati Bhattacharya, K. Krithivasan, Salil Parekh Speakers…
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S16
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — <strong>Moderator:</strong> With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S17
Building the Next Wave of AI_ Responsible Frameworks & Standards — This panel discussion at the Global AI Summit focused on reimagining responsible AI and balancing rapid innovation with …
S19
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S20
Multistakeholder Partnerships for Thriving AI Ecosystems — Bhattacharya advises against being driven by market capitalizations when creating companies, emphasizing that sustainabl…
S21
Building Inclusive Societies with AI — Evidence:Example of a skilled plumber in a village who might be unaware of good opportunities in neighboring villages E…
S22
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti — But if you ask most organizations, they will tell you that 30 -40 % cloud has been adopted. There is so much to be done….
S23
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-empowerment_-driving-change-and-inclusion — I just have a follow -up on that, and then I’ll move to Julie. I’ll put it very, I mean, let’s say, a very, very simple,…
S24
Collaborative AI Network – Strengthening Skills Research and Innovation — about those. So obviously, it’s not just creating applications. It’s the same old story of digital transformation, right…
S25
Inclusive AI Starts with People Not Just Algorithms — Hi, my name is I’m founder of an AI company. We work with global higher education institutions. So I actually led my lif…
S26
Pre 8: IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth — Anja Gengo: Yes, I am. Thank you. I hope you can hear me. First of all, thank you so much for such an interesting and ri…
S27
How AI Is Transforming Indias Workforce for Global Competitivene — While acknowledging a transition period, Srikrishna believes AI will ultimately generate more employment opportunities t…
S28
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S29
Generative AI is enhancing employment opportunities and shaping job quality, says ILO report — A new study conducted by the International Labour Organization (ILO) investigates the consequences of Generative AI on t…
S30
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Sabharwal contends that the traditional hourly pricing model ($20-40 per hour) in Indian IT services will become obsolet…
S31
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “AI will come, jobs will go, mass exodus will happen in corporates”[16]. “What do you mean by the business model will ha…
S32
How AI Is Transforming Indias Workforce for Global Competitivene — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S33
Driving Indias AI Future Growth Innovation and Impact — Evidence:Generative AI was built without following strict rules initially. Cloud security and data protection are still …
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Madaan reinforces the concept that employment disruption happens at the task level rather than complete job elimination….
S35
IT clients taking cautious approach to costly AI technology, says Infosys executive — IT clientsare keen to adoptAI technology, but the high cost is causing them to take a cautious approach, according to Sa…
S36
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Economic | Infrastructure European Competitive Advantages and Success Stories Klein argues that Europe shouldn’t try t…
S37
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Thank you. Thank you for inviting me here. So it’s a very valid question. And I will not answer it in a very technical w…
S38
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So future of AI I think will depend on the market. We’ll also depend on the people. We’ll also depend on the trust in us…
S39
The Foundation of AI Democratizing Compute Data Infrastructure — This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerm…
S40
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that…
S41
Collaborative AI Network – Strengthening Skills Research and Innovation — Artificial intelligence | Information and communication technologies for development Garg frames AI itself as a possibl…
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Industry Perspectives: Systems Integration Challenges Eltjo Poort: thank you Isadora yeah and thanks for giving me t…
S43
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Mark Irura:To add on to what’s been shared already, the supply and the demand side were mentioned. And on the supply sid…
S44
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S45
Ministerial Roundtable — There’s a stark contrast between countries that have achieved near-universal connectivity (like Azerbaijan) and those st…
S46
All hands on deck to connect the next billions | IGF 2023 WS #198 — Expanding internet connectivity is a complex task that requires innovative approaches, responsive to the needs of local …
S47
Fixing Healthcare, Digitally — Traditional models may not fully address the complex gaps and needs in healthcare infrastructure. Moreover, it is crucia…
S48
A digital public infrastructure strategy for sustainable development – Exploring effective possibilities for regional cooperation (University of Western Australia) — In conclusion, DPI is a critical building block for the digital economy and plays a significant role in achieving the SD…
S49
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Thank you and good evening everyone. HCL Tech, as you kind of gave some pointers, we are uniquely placed because, first …
S50
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — When addressing competition with hyperscalers’ massive infrastructure investments, the Indian IT companies positioned th…
S51
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — Economic | Legal and regulatory | Human rights Five hyperscaler firms competing to reach AGI first; financial structure…
S52
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Larissa Zutter:So this is not quite as concrete as you might want, but I think I want to piggyback off of what was said …
S53
AI Governance Dialogue: Steering the future of AI — – **Civil Society**: Advocacy ensuring frameworks reflect diverse societal needs Doreen Bogdan Martin: Thank you. And w…
S54
Global Digital Governance &amp; Multistakeholder Cooperation for WSIS+20 — The session concluded with time constraints as “the president of Estonia is about to make his remarks,” reflecting the b…
S55
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S56
How AI Is Transforming Indias Workforce for Global Competitivene — While acknowledging a transition period, Srikrishna believes AI will ultimately generate more employment opportunities t…
S57
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Near-term job displacement will likely be offset by new job creation, with current impact mainly on junior-level positio…
S58
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Historical evidence shows that technological advances eliminate some jobs while creating others, with the net effect bei…
S59
Artificial intelligence — The disruptions that AI systems could bring to the labour market are another source of concern. Many studies estimate th…
S60
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S61
Ray Dalio warns of global breakdown behind market turmoil — Billionaire investorRay Daliohas warned that the recent market turbulence is part of a larger global crisis. The turmoil…
S62
AI investment shows strong momentum beyond bubble fears — AI investmentis not showingsigns of a speculative bubble, according to theAlibaba Groupchairman. Instead, he argued at t…
S63
How AI Drives Innovation and Economic Growth — Arguments:First, model evaluation. So AI companies typically do that part. How good is the model output for specific tas…
S64
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S65
AI: The Great Equaliser? — While the introduction of AI technology may result in job losses in certain sectors, it also creates new job opportuniti…
S66
Shaping the Future AI Strategies for Jobs and Economic Development — Continuous learning and upskilling will be essential for workforce adaptation to rapid technological change across all s…
S67
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, reskilling the workforce is crucial to fully embrace new technologies. AI, for instance, has the potential…
S68
AI will not replace people – but people who use AI will replace people who do not | IBM’s Report — According toIBM’s report, executives estimate that around 40% of their workforce will need to reskill due to implementin…
S69
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than following historical patterns of automation that replace workers, AI development should prioritize applicati…
S70
SAP elevates customer support with proactive AI systems — AIhas pushedcustomer support into a new era, where anticipation replaces reaction. SAP has built a proactive model that …
S71
Multistakeholder Partnerships for Thriving AI Ecosystems — An audience member raised concerns about whether AI democratisation would genuinely benefit small enterprises or primari…
S72
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Sabharwal contends that the traditional hourly pricing model ($20-40 per hour) in Indian IT services will become obsolet…
S73
Building the Next Wave of AI_ Responsible Frameworks &amp; Standards — Arundhati Bhattacharya from Salesforce highlighted how her company established an office for humane and ethical use of t…
S74
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “AI will come, jobs will go, mass exodus will happen in corporates”[16]. “What do you mean by the business model will ha…
S75
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — The industry leaders unanimously rejected predictions that AI would eliminate the services model. Krithivasan noted that…
S76
Inclusive AI Starts with People Not Just Algorithms — 200 years later, we are like, okay, let’s clean it up. Even in the Internet revolution, you know, we have the problems w…
S77
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Krishan positioned this challenge within a broader context, noting that the key lies in focusing on value creation and p…
S78
How AI Is Transforming Indias Workforce for Global Competitivene — -Srikrishna Ramakarthikeyan- (Role/title not clearly specified, but appears to be from IT services sector based on discu…
S79
From Innovation to Impact_ Bringing AI to the Public — Discussion point:Evolution of banking services while maintaining core functions Discussion point:Evolution of work rath…
S80
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — Our first major research vertical is in structured foundation. A field that a recent Forbes article estimates at a $600 …
S81
AI for equality: Bridging the innovation gap — Blair presented evidence from surveys conducted with the World Bank and Intuit of 3,000 women entrepreneurs, showing tha…
S82
IT clients taking cautious approach to costly AI technology, says Infosys executive — IT clientsare keen to adoptAI technology, but the high cost is causing them to take a cautious approach, according to Sa…
S83
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Discussion point:Global talent acquisition for Indian IP development Discussion point:Strategic pivot from services to …
S84
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Thank you and good evening everyone. HCL Tech, as you kind of gave some pointers, we are uniquely placed because, first …
S85
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Thank you. Thank you for inviting me here. So it’s a very valid question. And I will not answer it in a very technical w…
S86
Empowering People with Digital Public Infrastructure — Pervinder Johar: Absolutely. So I think our focus is on what we call the physical infrastructure of the world. So whe…
S87
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So future of AI I think will depend on the market. We’ll also depend on the people. We’ll also depend on the trust in us…
S88
Collaborative AI Network – Strengthening Skills Research and Innovation — Artificial intelligence | Information and communication technologies for development Garg frames AI itself as a possibl…
S89
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Saibal argues that India is approaching AI with the same ethos as DPI – treating it as shared public infrastructure that…
S90
The Foundation of AI Democratizing Compute Data Infrastructure — “It needs to be interoperable and shareable.”[37]. “So I think two characteristics of digital public infrastructure, whi…
S91
Building Inclusive Societies with AI — Impact:This comment fundamentally reframed the discussion from focusing on solutions to focusing on execution mechanisms…
S92
Building Inclusive Societies with AI — These key comments fundamentally shaped the discussion by challenging assumptions, introducing new frameworks, and groun…
S93
Open Forum #66 the Ecosystem for Digital Cooperation in Development — The discussion maintained a consistently collaborative and solution-oriented tone throughout. It began with formal intro…
S94
From Technical Safety to Societal Impact Rethinking AI Governanc — The discussion began with a formal, academic tone but became increasingly critical and urgent throughout. Speakers expre…
S95
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S96
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S97
Debating Education / DAVOS 2025 — The tone was thoughtful and analytical, with panelists offering differing perspectives in a respectful manner. There was…
S98
Comprehensive Summary: The Future of Robotics and Physical AI — The tone was optimistic yet realistic throughout. The panelists demonstrated enthusiasm about recent breakthroughs and n…
S99
Driving Enterprise Impact Through Scalable AI Adoption — The tone was thoughtful and exploratory rather than alarmist, with participants acknowledging both the transformative po…
S100
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S101
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S102
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S103
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S104
Towards a Resilient Information Ecosystem: Balancing Platform Governance and Technology — The discussion maintained a professional, collaborative tone throughout, characterized by constructive problem-solving r…
S105
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S106
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S107
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S108
AI Innovation in India — The tone was consistently celebratory, inspirational, and optimistic throughout the discussion. Speakers expressed pride…
S109
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — The discussion began with a technology-focused, optimistic tone about AI’s transformative potential but gradually shifte…
S110
Salesforce’s AI tools drive growth — Salesforce sharessoaredto a record high of $368.7 on Wednesday, climbing 11% after surpassing quarterly sales estimates …
S111
Can a layered policy approach stop Internet fragmentation? | IGF 2023 WS #273 — Audience:We will fight to see who goes first. Colin Perkins, University of Glasgow. I guess I want to follow up a little…
S112
AI Infrastructure and Future Development: A Panel Discussion — Economic | Infrastructure Lessin raises concerns from the financial industry about whether the complex financing arrang…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Arundhati Bhattacharya
5 arguments159 words per minute1123 words421 seconds
Argument 1
SaaS resilience through workflow, governance, and value‑add
EXPLANATION
Arundhati argues that the SaaS model’s strength lies beyond simple code generation; it requires deep understanding of customer workflows, governance, auditability, and adoption to deliver real value.
EVIDENCE
She explains that SaaS is not only about vibe coding or creating an application, but also about understanding workflows, addressing customer pain points, ensuring observability, governance, auditability, and adoption, emphasizing that these multiple pieces are essential for success [16-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bhattacharya emphasizes that SaaS success requires deep workflow understanding, governance, auditability, and adoption, as highlighted in the panel transcript <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4] and reinforced in the discussion summary [S5].
MAJOR DISCUSSION POINT
SaaS success depends on holistic operational capabilities.
Argument 2
Market hype often overstates AI disruption; investors must read fine print
EXPLANATION
She cautions that market narratives frequently exaggerate AI’s impact, with inflated valuations and circular money, and advises investors to scrutinize details before drawing conclusions.
EVIDENCE
Arundhati notes that markets say many things, not all true, that people pump up values due to large money flows, some of which is circular, and urges investors to read the fine print and exercise discretion [14-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She warns that market valuations can be inflated by circular money flows and advises investors to scrutinize details, matching her comments on avoiding market-cap-driven decisions [S20] and observations of market manipulation [S5].
MAJOR DISCUSSION POINT
Skepticism toward AI market hype.
DISAGREED WITH
Implicit market narrative (as referenced by Amitabh Kant)
Argument 3
AI must be accessible to improve livelihoods of blue‑collar and MSME sectors
EXPLANATION
She stresses that AI should not be limited to white‑collar workers; it can empower blue‑collar workers and MSMEs by addressing their specific challenges.
EVIDENCE
Arundhati states that AI is not just for white-collar workers, it can empower blue-collar workers, and cites a report covering carpenters, plumbers, hospitality and Anganwadi workers, highlighting the need for democratization [158-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Her point about AI empowering blue-collar workers and MSMEs is echoed by the panel’s identification of challenges for carpenters, plumbers, and other workers and the role of AI marketplaces [S5], as well as a concrete plumber example [S21].
MAJOR DISCUSSION POINT
Democratizing AI for broader workforce.
AGREED WITH
Salil Parekh, Audience (Kishla)
Argument 4
Address skilling, job‑access, and payment challenges through AI‑enabled marketplaces
EXPLANATION
She outlines how AI‑driven marketplaces can solve blue‑collar workers’ problems of skill validation, job discovery, timely payments, and community support.
EVIDENCE
She describes challenges such as skilling, access to jobs, timely payments, and community support, and argues that AI-enabled marketplaces can provide certifications, skill assessments, and job matching to improve quality of life for workers and their customers [167-172].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion details how AI-driven marketplaces can provide certifications, skill assessments, and timely payments for blue-collar workers, supporting her claim [S5] and the plumber case study [S21].
MAJOR DISCUSSION POINT
AI as a solution for blue‑collar ecosystem challenges.
AGREED WITH
K. Krithivasan, Vijayakumar C., Amitabh Kant
Argument 5
Salesforce ecosystem offers mentorship and community contacts for startups
EXPLANATION
Arundhati points to Salesforce’s vibrant startup community and provides a direct contact for entrepreneurs seeking support.
EVIDENCE
She mentions that Salesforce has a vibrant startup community led by Rupa Arvindakshan, whose coordinates are on the website, and encourages startups to get in touch for development and market access support [274-280].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel notes Salesforce’s vibrant startup community and provides contact details for mentorship through Rupa Arvindakshan <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
Startup mentorship via corporate ecosystem.
K
K. Krithivasan
5 arguments178 words per minute773 words259 seconds
Argument 1
System integrators remain essential; role moves to requirements and context engineering
EXPLANATION
Krithivasan asserts that despite AI code generation, system integrators will still be needed to validate, test, and ensure security, shifting their focus toward requirements and context engineering.
EVIDENCE
He explains that system integrators are needed because of complex legacy systems, to test, validate, and verify AI-generated code, and that the role will shift toward requirements engineering, context engineering, and security validation [51-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Krithivasan stresses that system integrators are still needed for legacy complexity, a view confirmed by the panel’s consensus that the services model will evolve rather than disappear <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
Evolving role of system integrators.
AGREED WITH
Salil Parekh, Vijayakumar C.
DISAGREED WITH
C. Vijayakumar
Argument 2
No major headcount shrink; volume and complexity of work will increase
EXPLANATION
He predicts that AI will not cause a significant reduction in workforce size; instead, the amount and sophistication of work will grow.
EVIDENCE
Krithivasan states that he does not envisage a significant shrinkage of headcount, but rather a larger volume of work and more interesting work being produced [68-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He predicts stable headcount with increased work volume, consistent with broader observations that AI will generate more jobs than it eliminates [S27].
MAJOR DISCUSSION POINT
Workforce size remains stable, workload expands.
AGREED WITH
Salil Parekh, Vijayakumar C.
Argument 3
AI can rapidly up‑skill large numbers; partnership with Ministry of IT to create curricula
EXPLANATION
He highlights a national initiative, collaborating with the Ministry of IT, to develop curricula that can quickly up‑skill large populations for AI‑driven jobs.
EVIDENCE
Krithivasan describes a recent workshop with 1,500 non-technical schoolchildren, teaching coding in native languages, and notes that all three panelists are working with the Ministry of IT to create curricula for university students [135-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He describes a workshop with the Ministry of IT that taught coding to 1,500 non-technical students, illustrating rapid up-skilling efforts [S5].
MAJOR DISCUSSION POINT
National AI up‑skilling collaboration.
AGREED WITH
Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
Argument 4
Hands‑on workshops show AI’s potential to empower non‑technical youth
EXPLANATION
He provides evidence that short, practical workshops can enable thousands of non‑technical participants to build apps, demonstrating AI’s empowering potential.
EVIDENCE
He recounts that in a three-hour session, 1,500 participants built apps, showcasing AI’s ability to quickly up-skill and empower people without prior technical background [136-142].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same workshop demonstrated that participants could build apps in three hours, showcasing AI’s empowerment potential [S5].
MAJOR DISCUSSION POINT
Practical AI training for youth.
AGREED WITH
Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
Argument 5
AI will create more jobs than it destroys; new roles will differ from traditional programming
EXPLANATION
Krithivasan argues that AI will be a net job creator, though the nature of those jobs will shift away from conventional programming tasks.
EVIDENCE
He states that AI will create more jobs than it destroys, and that many of the new jobs will not be programming-centric, reflecting a change in job classifications [300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He argues AI will be a net job creator, supported by reports that generative AI improves employment prospects and drives economic growth [S27], [S28], [S29].
MAJOR DISCUSSION POINT
AI as a net job creator.
AGREED WITH
Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
S
Salil Parekh
6 arguments149 words per minute789 words316 seconds
Argument 1
Services model is alive; AI creates $300 bn opportunity via AI engineering, legacy modernization, etc.
EXPLANATION
Salil contends that the services model remains viable, with AI opening roughly $300 billion of opportunities across six identified areas such as AI engineering and legacy modernization.
EVIDENCE
He cites that Infosys sees about $300 bn of AI services opportunity over the next years, highlighting AI engineering and legacy modernization as examples where AI agents lower cost and time, creating economic rationale for companies [82-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Parekh cites Infosys’s estimate of a $300 bn AI services opportunity across AI engineering, legacy modernization, and other areas <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
AI‑driven growth for services model.
AGREED WITH
K. Krithivasan, Vijayakumar C.
Argument 2
Aggressive hiring and execution on AI services will drive growth
EXPLANATION
He points to Infosys’s large recruitment drives and headcount growth as evidence of its commitment to capture AI services opportunities.
EVIDENCE
Salil mentions recruiting 20,000 college graduates this year, a similar target for next year, and a 13,000 increase in headcount in the first three quarters, indicating continued expansion [91-95].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He points to Infosys’s recruitment of 20,000 graduates and a 13,000 headcount increase as evidence of aggressive hiring to capture AI services demand <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
Talent acquisition to fuel AI services.
Argument 3
Infosys’ Topaz Fabric IP layer enables use of any foundation model and custom agents
EXPLANATION
He describes Infosys’s proprietary Topaz Fabric, which allows clients to work with any foundation model and integrate custom or third‑party agents, representing a strategic IP asset.
EVIDENCE
He explains that Topaz Fabric is an IP layer that lets clients use any foundation model, combines Infosys-built agents and third-party agents, and that Infosys will continue to build on this IP [100-102].
MAJOR DISCUSSION POINT
Proprietary AI integration platform.
Argument 4
Deploy AI‑driven projects in agriculture, health, education using DPI principles
EXPLANATION
Salil outlines ongoing AI projects in key sectors, leveraging India’s digital public infrastructure (DPI) model to make AI services widely accessible.
EVIDENCE
He notes three big areas-agriculture, healthcare, education-where AI projects are being deployed, following the DPI approach of low-cost, widely available services, with components being rolled out in partnership with ministries [184-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He outlines AI deployments in agriculture, health, and education following India’s Digital Public Infrastructure (DPI) model, as described in the panel discussion <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
Sectoral AI deployment via DPI.
Argument 5
Leverage chip, data‑center, and architectural layers to make AI power common‑citizen ready
EXPLANATION
He emphasizes the need to build AI infrastructure across hardware, data‑center, and architectural layers to democratize AI access for citizens.
EVIDENCE
Salil references support at the chip layer, data-center layer, and infrastructure layer, and mentions ongoing work on architecture to distribute AI capabilities to the common citizen [190-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He mentions building AI capability across chip, data-center, and architectural layers to democratize AI for citizens, noted in the discussion <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
Infrastructure stack for AI democratization.
Argument 6
AI development must follow responsible‑AI frameworks and reflect cultural values
EXPLANATION
He asserts that responsible AI principles and cultural considerations are essential when building agents and training models.
EVIDENCE
Salil states that responsible AI is critical, that agents and foundation models must be built with responsible AI approaches, and that the industry should adopt such frameworks to ensure good outcomes [306-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He stresses adherence to responsible-AI frameworks and cultural considerations, aligning with the responsible AI assessment tool (RAISE Index) and calls for culturally aware AI governance [S17], [S19].
MAJOR DISCUSSION POINT
Ethical and cultural responsibility in AI.
AGREED WITH
Arundhati Bhattacharya, Audience (Kishla)
C
C. Vijayakumar
5 arguments136 words per minute831 words365 seconds
Argument 1
Focus on product business and bridging foundation models to enterprise, not becoming a hyperscaler
EXPLANATION
Vijayakumar explains that HCL will concentrate on its product business and on creating solutions that make foundation models usable for enterprises, rather than competing with hyperscalers.
EVIDENCE
He says HCL is uniquely placed with a software product business, builds custom silicon, and will focus on bridging foundation models to enterprise use cases, explicitly stating they are not becoming a hyperscaler and will not build models themselves [118-126].
MAJOR DISCUSSION POINT
Strategic positioning away from hyperscaling.
AGREED WITH
K. Krithivasan, Salil Parekh, Vijayakumar C.
DISAGREED WITH
K. Krithivasan
Argument 2
Leverage custom silicon and high revenue‑per‑employee to build scalable AI solutions
EXPLANATION
He highlights HCL’s capabilities, such as custom two‑nanometer silicon and the highest revenue per employee among Indian IT services, as foundations for scalable AI offerings.
EVIDENCE
Vijayakumar notes HCL’s 10 % revenue from product business, deep engineering heritage, a two-nanometer custom silicon project, and the highest revenue per employee among IT services firms [110-115].
MAJOR DISCUSSION POINT
Competitive advantage through hardware and efficiency.
Argument 3
Significant R&D investment needed to build solutions, labs, and physical‑AI offerings
EXPLANATION
He argues that capturing the AI services market will require substantial R&D spending to develop solutions, labs, and emerging physical‑AI products.
EVIDENCE
He describes how large CapEx spend will generate services demand, cites a trillion-dollar physical AI opportunity, and stresses the need for building solutions, labs, and pre-work, concluding that R&D spend must increase [200-215].
MAJOR DISCUSSION POINT
R&D as a prerequisite for AI services capture.
Argument 4
Outcome‑based contracts will fund higher R&D spend ahead of market benefits
EXPLANATION
He suggests that as AI‑infused services grow, outcome‑based contracts will generate higher profitability, enabling firms to invest more in R&D before full market returns materialize.
EVIDENCE
Vijayakumar notes that outcome-based contracts will help deliver higher profitability, which in turn will allow comfortable investment in R&D, though timing may require early spending [214-215].
MAJOR DISCUSSION POINT
Financial model supporting early R&D.
Argument 5
Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential
EXPLANATION
He emphasizes that while programming remains fundamental, future success will hinge on critical thinking, orchestration of AI agents, and the ability to amplify output using AI tools.
EVIDENCE
He states that programming is essential for long-term software careers, highlights critical thinking and analytical skills, and describes orchestrating multiple coding agents to achieve 5× output as a key future skill [284-297].
MAJOR DISCUSSION POINT
Skill set evolution for AI‑augmented software work.
AGREED WITH
K. Krithivasan, Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
A
Amitabh Kant
2 arguments131 words per minute1327 words607 seconds
Argument 1
India is at a pivotal AI disruption point that demands proactive policy and industry engagement.
EXPLANATION
Kant observes that the country is currently experiencing a major disruption driven by AI, implying that coordinated action from both the public and private sectors is essential to harness the opportunity.
EVIDENCE
He explicitly states, “We are actually meeting at a point of disruption,” signalling the need for strategic response [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kant’s statement about a disruption point is echoed by the panel’s emphasis on coordinated policy and multi-stakeholder frameworks for AI development <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4], [S19].
MAJOR DISCUSSION POINT
AI-driven disruption as a catalyst for strategic action.
Argument 2
AI will be a primary engine of employment and economic growth, propelling India toward a $30+ trillion economy and a “Vixit Bharat” by 2047.
EXPLANATION
Kant concludes that the wave of AI will generate far more jobs than it eliminates, driving unprecedented economic expansion and positioning India as a leading global economy by mid‑century.
EVIDENCE
He summarises the panel’s view that AI will create many jobs, boost productivity, and help India achieve a $30 + trillion economy and the vision of a “Vixit Bharat” by 2047 [312-317].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
His projection aligns with analyses that AI will boost employment and drive massive economic growth, supporting a $30+ trillion outlook [S27], [S28], [S29].
MAJOR DISCUSSION POINT
AI as a catalyst for massive job creation and macro‑economic growth.
A
Audience
2 arguments162 words per minute340 words125 seconds
Argument 1
Young entrepreneurs need structured mentorship and ecosystem support to turn AI ideas into viable ventures.
EXPLANATION
An audience member highlights the difficulty of accessing networks and guidance, arguing that a formal mentorship channel would enable emerging innovators to scale their AI projects.
EVIDENCE
Mania Sharma, a 27-year-old entrepreneur, asks for direct contact and support, noting she has “no network” and seeks mentorship to engage with the panelists and the broader AI ecosystem [225-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The audience’s request for mentorship matches the panel’s acknowledgment of the need for structured startup support and the existence of Salesforce’s mentorship network <a href="https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-amitabh-kant-niti/" target="_blank" class="diplo-source-cite" title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-title="Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI" data-source-snippet="Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact Summit 2026. Mr. Salil Pareek, Mr. K. Kritivasan, Mr. C. Vijay Kumar and Ms. Arun”>[S4].
MAJOR DISCUSSION POINT
Mentorship and ecosystem support for early‑stage AI entrepreneurs.
Argument 2
AI solutions should be rooted in local cultural heritage and values to ensure relevance and acceptance.
EXPLANATION
A participant argues that building AI without incorporating India’s cultural traditions risks producing solutions that are disconnected from societal context, advocating for culturally‑aware AI design.
EVIDENCE
Kishla states that unless AI is built on “our culture, rich culture, tradition, heritage,” it will be a “different kind of thing,” emphasizing the need for cultural integration in AI development [262-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for culturally rooted AI reflects the panel’s discussion on integrating cultural values into responsible AI design and broader calls for AI frameworks that respect local heritage [S19], [S17].
MAJOR DISCUSSION POINT
Cultural relevance and ethical grounding of AI systems.
Agreements
Agreement Points
AI will be a net job creator, generating more employment than it eliminates, though the nature of jobs will shift toward new AI‑augmented roles.
Speakers: Arundhati Bhattacharya, K. Krithivasan, Vijayakumar C., Amitabh Kant
AI must be accessible to improve livelihoods of blue‑collar and MSME sectors AI will create more jobs than it destroys; new roles will differ from traditional programming Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential AI will create many more jobs for India, driving a $30+ trillion economy
All speakers agree that AI will expand employment opportunities in India, especially for blue-collar and MSME workers, even though many of the new roles will require orchestration, critical thinking and AI-tool proficiency rather than traditional coding [158-172][300-304][284-297][312-317].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple studies and policy discussions highlight AI as a net job creator, with India’s AI workforce strategy emphasizing more jobs than losses and the need for upskilling [S56][S57][S58][S65][S66].
The traditional IT services model and system‑integrator role remain vital; AI will augment rather than replace these functions, and headcount is not expected to shrink dramatically.
Speakers: K. Krithivasan, Salil Parekh, Vijayakumar C.
System integrators remain essential; role moves to requirements and context engineering No major headcount shrink; volume and complexity of work will increase Services model is alive; AI creates $300 bn opportunity via AI engineering, legacy modernization, etc. Focus on product business and bridging foundation models to enterprise, not becoming a hyperscaler
Krithivasan stresses that system integrators will still be needed and that headcount will not fall sharply [51-53][68-70]; Salil highlights a $300 bn AI services opportunity that keeps the services model alive [82-89]; Vijayakumar adds that HCL will concentrate on bridging foundation models to enterprises rather than trying to become a hyperscaler [118-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry panels note that system integrators remain essential, with HCL and Infosys positioning themselves as bridges between foundation models and enterprise applications rather than pursuing hyperscaler scale [S49][S50][S42].
Upskilling, capacity building and education are essential to prepare the workforce for AI‑driven transformation.
Speakers: K. Krithivasan, Arundhati Bhattacharya, Vijayakumar C., Amitabh Kant
AI can rapidly up‑skill large numbers; partnership with Ministry of IT to create curricula Hands‑on workshops show AI’s potential to empower non‑technical youth Address skilling, job‑access, and payment challenges through AI‑enabled marketplaces Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential
Krithivasan describes a workshop that taught 1,500 non-technical students to build apps and notes collaboration with the Ministry of IT on curricula [135-142]; Arundhati outlines the skilling, access and payment challenges faced by blue-collar workers and proposes AI-enabled marketplaces to solve them [163-172]; Vijayakumar reiterates that programming fundamentals remain crucial while new orchestration skills are needed [284-297]; Amitabh’s question on national skilling strategy underscores the shared focus on capacity development [132-134].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks such as the US ‘worker-first AI agenda’ and various AI governance reports stress reskilling, capacity building and education as core to AI adoption [S64][S65][S66][S67][S68].
AI development should be responsible, inclusive and culturally grounded, ensuring that technology serves broader societal needs.
Speakers: Salil Parekh, Arundhati Bhattacharya, Audience (Kishla)
AI development must follow responsible‑AI frameworks and reflect cultural values AI must be accessible to improve livelihoods of blue‑collar and MSME sectors AI solutions should be rooted in local cultural heritage and values to ensure relevance and acceptance
Salil calls for responsible-AI practices and cultural alignment in building agents and models [306-311]; Arundhati stresses democratizing AI for blue-collar workers and MSMEs [157-166]; Kishla argues that AI should be built on India’s cultural heritage to be meaningful [262-267].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive AI governance discussions call for culturally grounded, responsible AI development, reflected in multistakeholder dialogues and inclusive AI initiatives [S52][S53][S54][S55][S48].
Similar Viewpoints
Both leaders emphasize the need for broad ecosystem partnerships—Salil through collaboration with ministries and public‑sector DPI initiatives, Vijayakumar through partnerships with major solution providers—to scale AI solutions effectively [188-190][124-125].
Speakers: Salil Parekh, Vijayakumar C.
Deploy AI‑driven projects in agriculture, health, education using DPI principles Partnering with almost all the large solution providers
Unexpected Consensus
Both Infosys and HCL choose to remain solution‑builders rather than pursue hyperscaler ambitions or develop their own large foundation models.
Speakers: Salil Parekh, Vijayakumar C.
Infosys’s Topaz Fabric IP layer enables use of any foundation model and custom agents I don’t think we are building anything to become a hyperscaler… not building models
Despite being large IT services firms, both Salil and Vijayakumar state that their strategy is to create proprietary IP and integration layers (Topaz Fabric) and to focus on building solutions, explicitly rejecting the pursuit of hyperscaler status or in-house model development [99-102][125-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions with HCL leadership confirm their strategy to stay as solution-builders and avoid building large foundation models, contrasting with hyperscaler ambitions [S49][S50][S51].
Overall Assessment

The panel reached strong consensus on four core themes: (1) AI will be a net creator of jobs, especially for blue‑collar and MSME workers; (2) the traditional services and system‑integrator model will persist and even expand with AI engineering opportunities; (3) large‑scale upskilling and capacity‑building are essential to equip the workforce for new AI‑augmented roles; (4) AI must be developed responsibly, inclusively and with cultural relevance. These agreements cut across the digital economy, capacity development, AI governance and social development domains.

High consensus – the speakers largely reinforce each other’s positions, indicating a shared vision that policy, industry investment and education should focus on inclusive, responsible AI deployment rather than fearing displacement.

Differences
Different Viewpoints
Future role of system integrators versus product‑focused AI solution building
Speakers: K. Krithivasan, C. Vijayakumar
System integrators remain essential; role moves to requirements and context engineering Focus on product business and bridging foundation models to enterprise, not becoming a hyperscaler
Krithivasan argues that despite AI code generation, system integrators will still be needed to test, validate and ensure security of complex legacy environments, with a shift toward requirements and context engineering [51-53]. Vijayakumar counters that HCL will concentrate on building product solutions that make foundation models usable for enterprises, emphasizing bridging gaps rather than traditional system-integration services, and explicitly states they are not becoming a hyperscaler or building models themselves [118-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the future of system integrators versus product-focused AI firms are captured in system integration challenge reports and industry panels on bridging models and applications [S42][S49][S50].
Extent of market hype and valuation of AI‑driven SaaS disruption
Speakers: Arundhati Bhattacharya, Implicit market narrative (as referenced by Amitabh Kant)
Market hype often overstates AI disruption; investors must read fine print AI agents will replace per‑seat software subsystems – market suggests traditional SaaS model is under threat
Arundhati cautions that market narratives frequently exaggerate AI impact, noting inflated valuations and circular money, and urges investors to scrutinize details [14-28]. Amitabh’s question frames the market view that AI agents could replace traditional SaaS per-seat models, implying a significant threat to the SaaS business model [11]. This reflects a disagreement between Arundhati’s skeptical view of market hype and the market-driven narrative of imminent SaaS disruption.
POLICY CONTEXT (KNOWLEDGE BASE)
Analysts differentiate between genuine AI investment momentum and hype, noting concerns about valuation of AI-driven SaaS and bubble risks [S62][S63][S61].
Unexpected Differences
Contrasting views on the survivability of the traditional services model
Speakers: Salil Parekh, Bay Area leader (referenced by Amitabh Kant)
Services model is alive; AI creates $300 bn opportunity via AI engineering, legacy modernization, etc. Services model is dead within five years (Bay Area leader’s claim)
Amitabh cites a Bay Area leader who claimed the services model would die in five years [75-78]. Salil directly refutes this by stating the services model remains viable and outlines a $300 bn AI services opportunity [82-89]. The disagreement is unexpected because it pits a high-profile external prediction against the internal confidence of an industry leader.
POLICY CONTEXT (KNOWLEDGE BASE)
The survivability of the traditional services model is contested, with some reports emphasizing continued relevance of services firms while others highlight pressure from hyperscalers [S42][S49][S50][S51].
Overall Assessment

The panel shows broad consensus that AI will be a net creator of jobs and economic growth, but there are notable disagreements on implementation pathways: the role of system integrators versus product‑centric solution building, the degree to which market hype should be trusted regarding SaaS disruption, and whether the traditional services model is still viable. These divergences reflect differing strategic priorities among Indian IT firms and between industry insiders and external market narratives.

Moderate – while the overarching goals (AI‑driven growth, job creation, democratization) are shared, the speakers differ on key strategic approaches, which could lead to fragmented policy recommendations and varied investment strategies across the sector.

Partial Agreements
All speakers concur that AI will generate net employment and drive economic expansion, but they diverge on the nature of the future jobs and the pathways to achieve this outcome: Krithivasan emphasizes new, non‑programming job categories [300-304]; Vijayakumar stresses the need for programming fundamentals combined with orchestration and critical thinking skills [284-297]; Salil highlights large‑scale hiring and AI services opportunities as the growth engine [91-95]; Amitabh frames AI as the catalyst for a $30 + trillion economy and massive job creation [312-317].
Speakers: Amitabh Kant, K. Krithivasan, C. Vijayakumar, Salil Parekh
AI will be a primary engine of employment and economic growth, propelling India toward a $30+ trillion economy and a “Vixit Bharat” by 2047 AI will create more jobs than it destroys; new roles will differ from traditional programming Core programming plus orchestration, critical thinking, and AI‑tool mastery are essential Aggressive hiring and execution on AI services will drive growth
Takeaways
Key takeaways
The SaaS model remains resilient; success depends on workflow integration, governance, observability, and delivering concrete value, not just on low‑code or AI code generation. Market hype around AI‑driven disruption is often overstated; investors should scrutinize valuations and fine‑print. The core role of system integrators will shift from manual coding to requirements engineering, context engineering, validation, and orchestration of AI agents, without a major headcount reduction. The traditional services model is alive and can unlock a $300 bn+ AI services opportunity through AI engineering, legacy modernization, and other high‑value offerings. Infosys is building proprietary AI IP (Topaz Fabric) that abstracts foundation models and custom agents, positioning itself as both a builder and a platform provider. HCL Tech will focus on bridging foundation models to enterprise use cases and building scalable AI solutions, leveraging its product business and custom silicon, but will not attempt to become a hyperscaler. National skilling and reskilling are critical; AI can rapidly up‑skill large numbers, and TCS is collaborating with the Ministry of IT to develop curricula and run hands‑on workshops. Democratizing AI for MSMEs and blue‑collar workers is essential; AI‑enabled marketplaces can address skill, access, and payment challenges for these segments. India’s Digital Public Infrastructure (DPI) model will be extended to AI, with pilot projects in agriculture, health, and education, supported by chip, data‑center, and architectural layers. Capturing the AI services market will require increased R&D investment to build solutions, labs, and physical‑AI offerings; outcome‑based contracts can fund higher R&D spend. AI is expected to create more jobs than it destroys, but new roles will emphasize programming fundamentals, AI‑tool orchestration, critical thinking, and analytical skills. Responsible AI frameworks and cultural alignment are seen as non‑negotiable for trustworthy AI deployment. Start‑ups can tap into the Salesforce ecosystem for mentorship and community support.
Resolutions and action items
Infosys will continue aggressive hiring (20,000 graduates announced, 13,000 added in FY) to staff AI services and build Topaz Fabric IP. Infosys will expand execution on the six identified AI service areas to capture the $300 bn opportunity. TCS will focus on expanding requirements/context engineering, validation, cybersecurity, and cloud rationalization services as AI adoption grows. HCL Tech will prioritize building IP that bridges foundation models to enterprise workloads and will deepen partnerships with major solution providers. TCS (and other firms) will work with the Ministry of IT to develop AI curricula for universities and run large‑scale workshops for non‑technical youth. Infosys and other industry players will adopt responsible‑AI frameworks and embed cultural considerations into model training and deployment. Salesforce will provide a point of contact (Rupa Arvindakshan) for start‑ups seeking mentorship and ecosystem support. Industry consensus to increase R&D spend to develop AI solutions, labs, and physical‑AI offerings ahead of market demand.
Unresolved issues
Exact impact of AI on Salesforce’s market valuation and whether the SaaS model is fundamentally threatened. Specific headcount and revenue‑per‑employee targets for TCS by 2030. Detailed roadmap and funding mechanisms for a nationwide AI‑focused Digital Public Infrastructure. Concrete mechanisms to prevent misuse of AI (e.g., disinformation, malicious prompting). How Indian IT firms can collectively compete with hyperscalers in AI infrastructure without becoming hyperscalers themselves. Precise curriculum content and scaling strategy for national AI skilling and reskilling programs. Metrics and timelines for measuring the success of AI democratization for MSMEs and blue‑collar workers.
Suggested compromises
Acknowledgement that AI will not eliminate the SaaS business model but will require augmentation with workflow, governance, and value‑add capabilities. Balancing the view that AI will not cause massive headcount cuts with the need to upskill existing staff for orchestration roles. Combining proprietary IP development (Infosys Topaz Fabric) with openness to third‑party foundation models, rather than pursuing a pure build‑or‑buy stance. Emphasizing both aggressive R&D investment and reliance on outcome‑based contracts to fund that investment.
Thought Provoking Comments
Markets will say a lot of things, but the SaaS model is not just about code generation; it involves understanding workflows, governance, auditability, and adoption. AI‑generated code alone cannot replace these essential components.
She reframes the hype around AI‑driven code generation by highlighting the broader ecosystem needed for SaaS success, challenging the notion that AI will make traditional SaaS obsolete.
Shifted the conversation from a market‑value panic to a more nuanced view of SaaS resilience, prompting other panelists to discuss the continuing relevance of system integrators and the need for new skill sets.
Speaker: Arundhati Bhattacharya
System integrators will still be needed because enterprises have complex legacy environments. The future will focus more on requirements engineering, context engineering, validation, cybersecurity, and testing of AI‑generated outputs.
He identifies concrete areas where human expertise remains critical, countering the fear that AI will eliminate software engineering jobs.
Introduced the theme of role transformation rather than job loss, leading Salil and others to elaborate on new service opportunities and the importance of up‑skilling.
Speaker: K. Krithivasan
We see about $300 billion of AI services opportunity over the next few years across six domains – AI engineering, legacy modernization, AI factories, etc. – and we are scaling headcount (20 k graduates this year, 13 k added in Q3) to capture it.
Provides a data‑driven, optimistic outlook that the services model is not dead but evolving into high‑value AI‑centric offerings.
Set a positive tone for the panel, framing AI as a growth engine and prompting discussion on IP creation (Topaz Fabric) and recruitment strategies.
Speaker: Salil Parekh
AI must be democratized; it should empower blue‑collar workers and MSMEs, not just white‑collar professionals. We need to solve skilling, access, payment, and community support challenges for these groups.
Broadens the AI conversation to inclusive economic development, highlighting societal impact beyond large enterprises.
Steered the dialogue toward policy and public‑infrastructure considerations, leading Salil to talk about AI‑focused digital public infrastructure.
Speaker: Arundhati Bhattacharya
In a workshop with 1,500 non‑technical kids, we taught them to code in their native language and they built 1,500 apps in three hours – showing AI’s power to enable anyone to create software.
Demonstrates a tangible example of AI lowering entry barriers, reinforcing the argument that AI can be a catalyst for mass up‑skilling.
Supported Arundhati’s point on democratization and sparked interest in national‑level curriculum development, influencing the later discussion on DPI.
Speaker: K. Krithivasan
Physical AI represents a trillion‑dollar opportunity; to capture it, Indian IT firms must increase R&D spend now, building labs and POCs before the market matures.
Highlights a less‑discussed frontier (hardware‑centric AI) and stresses proactive investment, adding depth to the conversation about future revenue streams.
Prompted the panel to acknowledge the need for higher R&D intensity, linking back to Salil’s IP strategy and the broader question of competing with hyperscalers.
Speaker: C. Vijayakumar
Programming fundamentals remain essential, but the key future skill will be orchestration – managing multiple AI agents, critical thinking, and delivering outcomes at 5× the traditional speed.
Clarifies the evolving skill set required, countering the myth that coding will become obsolete, and provides a concrete direction for workforce development.
Guided the audience Q&A toward concrete skill recommendations, influencing Krithivasan’s later comment that AI will create more jobs than it destroys.
Speaker: C. Vijayakumar
We are already building a digital public infrastructure for AI—similar to the India Stack—targeting agriculture, healthcare, and education, with support at chip, data‑center, and architecture layers.
Positions AI as a national public good, extending the discussion from corporate strategy to country‑wide implementation.
Created a turning point that linked corporate initiatives to government policy, reinforcing the narrative of inclusive, large‑scale AI deployment.
Speaker: Salil Parekh
Overall Assessment

The discussion pivoted around three core insights: (1) AI will transform—not eliminate—existing SaaS and services models, as emphasized by Arundhati and Krithivasan; (2) the workforce will evolve, requiring new orchestration and validation skills, a point reinforced by Vijayakumar and Krithivasan’s skilling examples; and (3) India can leverage AI as a public‑good infrastructure, a vision articulated by Salil. These comments collectively shifted the tone from alarmist market speculation to a constructive, forward‑looking roadmap, prompting the panel to explore concrete opportunities (new service domains, IP development, R&D investment) and inclusive strategies (MSME empowerment, national DPI). The interplay of these thought‑provoking remarks shaped a narrative of optimism, responsibility, and strategic action for India’s AI future.

Follow-up Questions
What is the actual impact of AI agents on the traditional SaaS business model and market valuations?
Understanding whether AI agents truly threaten SaaS revenues is crucial for investors, enterprises and policy makers.
Speaker: Arundhati Bhattacharya
What are the projected headcount and revenue per employee for TCS in 2030, and how will the transition to AI orchestration be communicated to the workforce?
Concrete metrics are needed for workforce planning and to manage employee expectations during the AI‑driven shift.
Speaker: Amitabh Kant, K. Krithivasan
How can Indian IT firms close the execution gap identified by Nandan Nilekani and capture the $300‑$400 billion AI services opportunity?
Bridging the execution gap determines whether the large market potential can be realized by Indian service providers.
Speaker: Amitabh Kant, Salil Parekh
What IP strategies should Indian IT services companies adopt to own parts of the AI stack rather than remain builders for hire?
Owning AI IP could create sustainable competitive advantage and new revenue streams for Indian firms.
Speaker: Amitabh Kant, Salil Parekh
Is it feasible for Indian IT firms like HCL to become hyperscalers, and what would be required in terms of investment and capabilities?
Assessing the possibility of moving up the stack informs long‑term strategic decisions and capital allocation.
Speaker: Amitabh Kant, C. Vijayakumar
What specific national‑level skilling and reskilling curricula are being developed with the Ministry of IT, and how will they be scaled across the country?
A clear curriculum and scaling plan are essential to address the massive up‑skilling challenge for millions of graduates.
Speaker: K. Krithivasan
How can AI tools be democratized for MSMEs, considering unit economics and scalability?
Making AI affordable and usable for small businesses is key to broad‑based productivity gains in the Indian economy.
Speaker: Arundhati Bhattacharya
What architecture and governance model will underpin a Digital Public Infrastructure for AI in India?
A national AI infrastructure requires a defined technical architecture, data policies and governance to be effective and inclusive.
Speaker: Salil Parekh
What level of R&D intensity (budget, talent, timelines) is required for Indian IT firms to capture the projected AI services market?
Quantifying R&D needs helps firms plan investments and ensures they are not left behind in the AI race.
Speaker: C. Vijayakumar
What types of new jobs will AI create in India, and what specific skills will be in demand?
Identifying emerging job categories and skill requirements guides education, training programs and career planning.
Speaker: Navneet Kaul, K. Krithivasan, C. Vijayakumar
What measures can be implemented to prevent misuse of AI, such as disinformation or unrest?
Developing safeguards and policy frameworks is critical to ensure AI benefits society without causing harm.
Speaker: Harswar (audience)
How can AI solutions be built that reflect Indian culture, heritage, and values?
Culturally aligned AI can improve adoption, relevance, and ethical compliance within the Indian context.
Speaker: Mamanama Venkatana Rasimahati (audience)
How can startups engage with large enterprise ecosystems like Salesforce for mentorship and market access?
Clear pathways for startup collaboration can accelerate innovation and broaden the AI ecosystem.
Speaker: Mania Sharma (audience), Arundhati Bhattacharya

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion

Session at a glanceSummary, keypoints, and speakers overview

Summary

The summit’s “Adoption and Acceleration of Artificial Intelligence” panel brought together leaders from philanthropy, finance, and government to discuss how AI can be deployed equitably and responsibly worldwide [5-8][9-14]. Moderator Rudra Chaudhry framed the discussion around the tension between policy and large-scale adoption, citing recent calls from India’s prime minister and France’s president for responsible diffusion [22-27].


Rwandan Minister Paula Ingabire explained that Rwanda adopts an adaptive, use-case-driven regulatory posture, building rules only after concrete AI applications reveal specific risks, rather than imposing abstract frameworks [35-44]. She emphasized that partnerships must include capacity-building, co-development, and data-sovereignty safeguards such as a national data hub and a pre-existing data protection law [45-53][54].


When asked about a global AI compact, Ingabire affirmed its feasibility but stressed that standards must be contextualized to diverse cultural and linguistic settings and tied to the concrete problems nations aim to solve [61-66]. Tara Lyons noted that while the fundamental policy questions raised during the Obama administration-fairness, transparency, interoperability-remain unchanged, the field has shifted from theoretical debate to applied challenges faced by deploying organizations [71-80]. She argued that the hardest issues are now human and institutional, requiring trustworthy, responsibly scaled AI that delivers real value to users rather than purely technical breakthroughs [82-88].


John Palfrey, representing the MacArthur Foundation, reiterated that AI should serve humanity, calling for stable regulatory regimes that keep humans at the centre and for philanthropy to fund civil-society voices that can shape those rules [95-99][101-108]. He highlighted that the foundation has mobilised over a billion dollars in AI-focused philanthropy, underscoring the sector’s role in supporting research, governance, and inclusive innovation [110-121].


Tara added that the finance sector’s long-standing risk-management experience offers a model for use-case-level governance, and she advocated for greater regulatory harmonisation to enable consistent global deployment [156-169][170-174]. Both speakers called for broader multi-stakeholder participation, urging that future panels include representatives from retail, energy, and manufacturing to showcase concrete value creation for citizens [179-183].


Ingabire described Rwanda’s concrete AI benefits in health, education, and agriculture-improving diagnosis, lesson planning, and farmer data services-while noting that financial sustainability will be measured through service quality rather than direct OPEX returns [129-146]. She also stressed the importance of measuring impact, expanding South-South cooperation, and hosting future summit activities in Kigali to ensure African voices shape the emerging AI governance landscape [205-216].


The discussion concluded that coordinated, use-case-specific regulation, inclusive partnerships, and sustained philanthropic and financial support are essential to realise AI’s promise while managing its risks on a global scale [55-60][156-174][179-186].


Keypoints


Major discussion points


Rwanda’s adaptive, use-case-driven regulatory model and capacity-building partnerships – The minister explained that Rwanda prioritises identifying high-impact AI use cases first and then crafts specific regulations, rather than imposing abstract rules, and that partnerships are structured to transfer skills and ensure local ownership [35-44][45-53].


Feasibility of a global AI compact that respects diverse contexts – Rudra asked whether a worldwide agreement on AI risks is possible, and Paula responded that a compact can exist but must embed non-negotiable shared standards while allowing contextual adaptation for each nation’s problems [59-66].


Human-centred AI, trust and responsible diffusion – Both John and Tara stressed that AI should serve people, not the opposite, and that the hardest challenges are not technical but about making AI trustworthy and useful in everyday life, which is essential for broad adoption [95-98][71-80][82-88].


Philanthropy and multi-stakeholder collaboration as a catalyst for responsible AI – John highlighted the need for sustained philanthropic funding to give civil society a voice, to support research and implementation, and to bridge gaps between innovation and regulation [101-110][120-121].


Regulatory harmonisation and risk-management at scale – Tara described how a global financial institution manages AI risk at the use-case level, stresses the importance of sector-specific oversight, and calls for cross-border regulatory alignment to enable safe, large-scale deployment [156-169][172-174].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be adopted and accelerated responsibly worldwide, sharing concrete experiences (e.g., Rwanda’s approach, financial-sector risk management) and debating the need for common governance frameworks, funding mechanisms, and multi-stakeholder collaboration that can translate policy into real-world impact.


Overall tone


The conversation began with a formal, courteous opening and an optimistic framing of AI’s potential. As the dialogue progressed, the tone became more probing and analytical, with speakers questioning feasibility (global compact) and highlighting challenges (trust, regulation, financing). Throughout, the tone remained constructive and forward-looking, ending on a collaborative note that invited continued cooperation and concrete next steps (e.g., South-South cooperation, future summit venues).


Speakers

John Palfrey – President of the John D. and Catherine T. MacArthur Foundation; law professor; expertise in philanthropy, AI policy, and law. [S1]


Rudra Chaudhry – Vice President of Observer Research Foundation; moderator of the panel; expertise in AI policy and governance.


Speaker 1 – Opening host/moderator of the summit; specific role or title not provided.


Terah Lyons – Managing Director and Global Head of AI and Data Policy at JPMorgan Chase; expertise in AI policy, finance, and risk management.


Paula Ingabire – Minister of ICT and Innovation, Rwanda; expertise in digital governance, AI adoption, and data sovereignty. [S12]


Additional speakers:


Stephen Bird – Global Head of Thematic Research at Morgan Stanley; expertise in investment research and AI market assessment.


Full session reportComprehensive analysis and detailed insights

Opening & Panel Introduction – Speaker 1 thanked the host, invoked the “AI for all” vision, announced a lost-rupee card, and introduced the panel: John Palfrey (MacArthur Foundation), Terah Lyons (JPMorgan Chase), Her Excellency Paula Ingabire (Rwanda), and moderator Rudra Chaudhry (Observer Research Foundation). Stephen Bird was named in the introduction but did not speak. [1-18]


Framing the Discussion – Rudra opened his 25-minute segment by describing the tension between policy and large-scale adoption and citing recent calls from India’s prime minister and France’s president for responsible AI diffusion in the Global South. He then asked how Rwanda balances governance with population-scale deployment. [19-34]


Rwanda’s Adaptive Regulatory Approach – Paula explained that Rwanda follows an “adaptive” strategy built around concrete use-cases. The government first identifies applications that can deliver the greatest societal benefit, then crafts use-case-specific regulations that evolve as evidence accumulates. Partnerships are required to co-develop solutions and train Rwandan staff, creating a closed loop between capacity-building and regulation. Rwanda is also establishing a national data hub and has enacted a data-protection and privacy law to safeguard data sovereignty. [35-54]


Possibility of a Global AI Compact – Rudra asked whether a global AI compact is realistic. Paula affirmed its feasibility, stressing that any agreement must contain non-negotiable shared standards while allowing cultural, linguistic and contextual adaptation for each nation’s specific problems. [55-66]


Historical Perspective on AI Policy (Obama Era) – Terah traced the origins of modern AI policy to the Obama administration, which first raised issues of fairness, transparency, bias mitigation and interoperability. She noted that the field has moved from theoretical debate to applied challenges. [67-78]


Current Hard Problems – Human & Institutional – Terah argued that the toughest challenges now are human and institutional: building trust, ensuring responsible scaling, and delivering real value to organisations and end-users rather than pursuing purely technical breakthroughs. [79-88]


Philanthropy & Human-Centred Regulation – John stressed that AI must serve humanity and called for a stable, human-centred regulatory regime to prevent the technology from being treated as “magical” or ungovernable. [89-99]


Funding the Ecosystem – John outlined the philanthropic sector’s contribution, citing the $500 million “Humanity AI” fund and a comparable commitment to the AI Collaborative, together amounting to over $1 billion for AI-for-humanity projects that support governance, research and inclusive innovation. [100-121]


Finance-Sector Experience & Need for Harmonisation – Terah described JPMorgan’s roughly $20 billion annual technology spend and a decade-long AI deployment journey that has progressed from analytics to large-language and agentic models. She highlighted sector-specific risk-management expertise and the importance of regulatory harmonisation across jurisdictions for multinational operators seeking “census-scale” deployment while maintaining consistent safeguards. [122-174]


Rwanda’s Value-Based Impact Metrics – Paula argued that AI value should be measured in health, education and agriculture outcomes rather than pure monetary ROI. She cited decision-support tools for community health workers, AI-enhanced lesson-planning for teachers, and data services for farmers that boost productivity and income. She emphasized that over 70 % of Rwanda’s population are youth, who are being trained to develop and maintain these solutions, reinforcing local ownership and trust. [175-190]


Future Directions & Requests


Terah expressed a desire to see more “real-economy” deployers (retail, energy, manufacturing) featured on future panels. [191-193]


John suggested that collaborations between philanthropy and frontier AI labs would be “exciting,” but did not commit to a specific partnership. [194-198]


Paula invited the summit organisers to consider hosting a future meeting in Kigali to deepen South-South cooperation and amplify African perspectives, and called for the development of impact-measurement metrics that quantify AI’s benefits across sectors. [199-207]


Rudra closed the session by thanking the panelists and the organisers for the discussion. [208-210]


Key Consensus Points – All participants endorsed: (a) an adaptive, use-case-specific regulatory approach anchored in human-centred values; (b) partnership models that embed capacity-building and data-sovereignty; (c) a flexible global compact with core non-negotiable standards; (d) the need for clear, sustainable financing mechanisms, whether philanthropic, commercial or OPEX-based; and (e) the development of systematic impact-measurement metrics to inform evidence-based policy. [35-54][55-66][79-88][100-121][122-174][175-190]


In sum, the panel highlighted that responsible AI diffusion depends on adaptive regulation, locally grounded partnerships, a shared yet culturally sensitive global framework, sustainable financing, and robust impact measurement. The forward-looking agenda calls for continued multi-stakeholder engagement, regulatory harmonisation, and South-South collaboration to ensure AI delivers equitable, trustworthy benefits worldwide while avoiding a false binary between regulation and innovation. [208-210]


Session transcriptComplete transcript of the session
Speaker 1

Thank you so much, Your Excellency, Eta Bush, for your valuable insights and for elevating the summit. And it’s really interesting to listen to the perspectives of countries like Sweden, because when we talk of AI for all and global cooperation, the role of each and every country becomes very, very important. Ladies and gentlemen, before I move on, I need to announce that there’s a rupee card which we found. If somebody has lost this rupee card, though I don’t know how much money is there, but if you’ve lost this rupee card, kindly come to me and collect it from me. Thank you. And ladies and gentlemen, now we move to the next panel discussion, which is on adoption and acceleration of artificial intelligence.

The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the world. Mr. John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation, one of the world’s most influential philanthropies, where he has championed the idea that technology must serve the public interest. His perspective on how AI can be deployed equitably, not just efficiently, is essential to the conversation. Ms. Tara Lyons is the managing director and global head of AI and data policy at JPMorgan Chase. AI at one of the world’s largest financial institutions, she is navigating the frontier where AI meets regulation, risk and responsible deployment, ensuring that AI in finance is not just powerful, but trustworthy.

Her Excellency Paula Njibar is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rwanda has emerged as one of Africa’s most ambitious digital economies, proving that visionary governance can leapfrog traditional development pathways. And we also have Mr. Stephen Bird as the panelist, who is the global head of thematic research at Morgan Stanley, bringing the investor’s lens to the question of which AI bets are real and which are hype. And this discussion will be moderated by Mr. Rudra Chaudhry, Vice President of Observer Research Foundation. Ladies and gentlemen, please join me in welcoming Mr. John Palfrey, Ms. Tara Leons, Her Excellency Paula Ngibar, and also Mr. Rudra Chaudhry. Please kindly come to the stage for this very interesting conversation, a panel on adoption and acceleration of AI.

Mr. Bird will be joining us very soon. Thank you.

Rudra Chaudhry

All right. Hi, everyone. There’s a good bit of distance between me and the panelists, which might be a good thing. We’ll see. We’ve got about 25 minutes, so I’m going to keep it quite swift. The general panel is about policy on the one side, adoption on the other. And I wonder if that’s actually the case. Yesterday in the inaugural, the prime minister made very clear that adoption is a huge opportunity for India and other parts of the global south. But we have to do it responsibly. President Macron made a very similar pitch in his inaugural speech. And I want to start with that framing. And I want to come to you, Minister. Rwanda is a fascinating country in general.

But you’re particularly fascinating on the African continent because you were way ahead of the AI curve in a sense. You invested in a startup ecosystem. You were looking at scale before many of us thought of use case scales. Give us a sense of how Rwanda. Manages these minefields between governance policy on the one side and adoption at population scale on the other.

Paula Ingabire

Thank you very much, Rudy, and great to see you all. I think for us, the decision has always been clear around how we leverage technology as a country to drive socioeconomic development. And so AI, like many other technologies that we’ve experimented with as a country, we took the same posture. And so the idea was figuring out how we leverage this particular technology to address societal challenges. And there were certain trade -offs that we had to make. When it comes to governance, it was a posture around, rather than try to focus more on regulating, we’d rather figure out where do we see AI creating the biggest benefits and gains for society. And then we’re able to build regulations according to the use cases that we’re implementing.

And so the regulatory posture that we take then is more adaptive. And it’s one where it’s evidenced best because we’re already building use cases, using that today. And so we’re able to determine what kind of regulations are needed, and they’re very specific to the problems that we are solving. as opposed to trying to create a very abstract regulatory framework, which may not necessarily address whatever risks and concerns that we foresee. The second one has always been on partnerships because that’s been key. The level of development, digital development that we’ve achieved as a country is thanks to the various partners that we’ve been able to attract into Rwanda. But partnerships, we also look at it very closely to determine how do we make sure that these partnerships are helping us to build capacity.

So, for example, we’re not going to acquire a foreign solution, invite them to train on our data and just leave us with an application. We want them to be able to train our people, co -develop this with our people so that at least we have the skill set and the mastery of what we’re trying to deploy, which will then create that closed loop around the regulatory environment that we put in place. And last, again, I think it’s a conversation that we’ve had throughout this week around sovereignty, thinking about data sovereignty. By design, we’re building our national data hub. and we’re really making sure we understand, you know, what are the guardrails that we put in place.

We don’t want to wait for a crisis to start, you know, worrying about who is using our data, what are they accessing that for. And so we started with already putting in place the data protection and privacy law that governs how you collect, use, and process data. And that has been the foundation through which we can then start to ensure that everything that we do from a data sovereignty perspective, we’re doing it by design.

Rudra Chaudhry

So I’m going to come back to the question on the benefits of AI for all of you and for you, Minister, in a minute. You know, this entire summit process started with Bletchley, where I think the general philosophy was that can we come to some kind of a global compact when it comes to risk and risk aversion, when it comes to early warning systems. The institutional outcomes was these AI safety institutes that were built out. Can I ask a challenging question? Is, from your perspective, is a global compact on something like AI actually possible? Or are there norms that we should generally be thinking about and fitting into our national jurisdictions?

Paula Ingabire

So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, everything. And so to a certain extent, what you’re looking at is what are some of those shared standards that we all subscribe to as countries, which are non -negotiables for everyone that is building and deploying AI products and solutions. And then obviously, you then get to contextualize it to whatever problems that you’re solving for. And so, again, it’s going to come back to what are nations deploying AI to solve for? And how do we make sure that these standards are reflective of what we’re looking to adopt through the global compact?

Rudra Chaudhry

Dara, if I could come to you. You were… You’re leading AI at J .P. Morgan. you’ve been in the Obama administration in a very different office on science and technology and policy, way before the AI wave kind of hit us, although people have been working for AI for three decades now. Just give us a sense, just before I come to the immediate, take us a sense back to those second term of the Obama administration. Give us a sense of how were you thinking about AI?

Terah Lyons

Well, I would say that era was the first in which global governments started considering AI policy questions at all. And honestly, a lot of the same questions were being asked then as are being asked now. The question of global governance that the minister just spoke to, I think, was top of mind then as it is today. Questions of standards generation and interoperability were certainly part of the conversation. Issues of fairness, transparency, bias mitigation. sort of localization and other questions were all very much germane. So, you know, in many respects, the field has completely transformed, especially from a commercial perspective, given the level of investment that we’re seeing globally in the last five years, especially. But in many other respects, the foundational questions remain the same that policymakers were considering over 10 years ago.

And those questions, I think, are applicable in a lot of different directions. You know, I think one of the big differences in the current moment is that I really feel like we’ve moved from an era where these conversations have been more theoretical to an era in which they are much more applied and made much more real by the questions being asked by organizations like ours, for example, as AI deploying entities. Where the, you know, the issues of applied AI organizations are really where the rubber meets the road when it comes to these governance issues that we’re talking about from the stage and that policymakers have been considering for the last decade.

Rudra Chaudhry

so I think if I talk to most people who’ve been for the first three summits and I talk to them about this summit there’s a lot of argument about there’s a lot of energy there’s a lot of discussion on use cases diffusion getting this out to humanity getting it out to people and now we have to work downstream and upstream and figure out how best to do the diffusion piece let me ask you a question is you’ve been here for three four days for the summit era what’s really struck you in terms of the diffusion argument the adoption argument and then if you put your policy regulatory lens to it what are you thinking right now

Terah Lyons

well I actually don’t think the hardest questions in this field maybe this is a controversial answer but I’ll try it on for size here I don’t think the hardest questions in this field are technical right now I think they are questions of human issues and institutional issues. And I hear that no matter where I am, talking to clients and other large enterprises, speaking to governments globally, whether in New York, California, Brussels, or Delhi here this week, where the hard problem really isn’t frontier advancement right now. It’s actually making this technology useful to real organizations and making it helpful to real people in their everyday lives. And core to that set of issues are the governance questions that have been so top of mind here at the summit, I think.

And questions of how we scale responsibly, how we engender trust in the technology, because in order for AI to be useful, it has to be applied. And in order for it to be applied and widely adopted, it needs to be trusted. And so these are, I think, are cornerstones of what we need to be thinking about when we’re actually thinking about the frontier of AI in many ways.

Rudra Chaudhry

John, you run one of the most important organizations in the world, and you’re a largest philanthropic organization in the world. If there are students there, you should corner John afterwards for all sorts of things, if there are professors in the audience. But you’ve also got a very strong legal background. So the same question to Tara is, when you think of diffusion, when you think of impact use cases, and you think of what Paula said, which is we have to be adaptive about the regulatory architecture, where are you at?

John Palfrey

Rudy, thank you. And first, let me please, on behalf of MacArthur Foundation, congratulate our hosts in India. What a wonderful global stage to be on, to be having this important conversation. The point of view that I come from as a law professor and as leader of a philanthropy MacArthur Foundation is, of course, that we need to make the technology, the AI, work for humans and to put humans at the center. And I’ve been delighted on this main stage and throughout the summit to hear that as the focus here in India and, of course, around the world. And I think the way to do that is not to treat the AI as something magical and separate, but rather connected to all of the things that we’re trying.

So whether it’s lifting people out of poverty or improving health care or… Thank you. a bank providing capital as needed, we need a stable regulatory regime that makes that possible and puts humans at the center rather than just seeking to advance the technology at all costs and then treating it as something magical and other than forms of mathematics, forms of science that we have been able through human history to regulate so that it serves humans, not for its own sake.

Rudra Chaudhry

From your perspective in terms of philanthropy, but also from the perspective perhaps of peers that you talk to, is the current moment with the verb for adoption, the verb for getting this out to people, changing the way you’re thinking about grantees, partners, and the philosophical way in which you’re thinking about releasing money?

John Palfrey

Yes and no. I think there are some constants in philanthropy that are very important and maybe more important than ever in this moment. You think about the amount of capital that is flowing towards AI and its development, mostly of course by the private sector. I think there are some constants in philanthropy that are very important in the private sector, sometimes by sovereign wealth funds and so forth. What we need to ensure is that civil society has a voice. And of course, again, I credit our hosts for including civil society in this conversation and continuing to do that from Bletchley to today and onward. And the civil society world doesn’t come for free. Somebody has to pay for it, right?

And philanthropy has been historically the source of funding that. And I’m very impressed by the Indian philanthropic environment that is developing. We’re excited to partnership with the Center for Exponential Change and others who are developing homegrown both philanthropy as well as ideas that are coming from India to the rest of the world. But if we don’t invest in civil society, there will be many, many fewer voices able to bring the kind of sensibility that we’re talking about to the world. It doesn’t come without actually thinking about it carefully. So no, we are thinking that long -term capital that is for academia, that is for organizations. And I think about, of course, the Observer Research Foundation, which you’re involved in, Partnership for AI, for which Tara was the founding ED.

These organizations, along with academia, are going to be able to bring the kind of sensibility that we’re talking about to the sensibility that we’re talking about to the world. And I think about the fact that we have to be world. And I think about the fact that we have to be able to bring the kind of sensibility that we’re talking about to the world. And I think about the fact that we have to be able to bring the kind of sensibility that we’re talking about to the world. And I think about the fact that we have to be able to bring the kind of sensibility that we have to be able to bring the kind of in a stable long -term way by philanthropy.

We’ve been able with colleagues to raise half a billion dollars for humanity AI and effort in the US, close to that amount for current AI led by Martin Tisnay and AI Collaborative for global efforts. So we’re over a billion dollars in commitments between these two efforts, but we have to be

Rudra Chaudhry

Minister, let me ask you a question on, you talked about the benefits of AI in Rwanda. Can you open that box up for us a little bit? You know, one of the arguments has been, is that, and there are a lot of arguments about how is this stuff going to pay for itself? Use case and diffusion is all great, but is there an OPEX model or a revenue model for beneficial deployment? It needs to be sustainable over a period of time. And there’s another argument which says, when people actually start using things that are useful, and they see value in it, the rest will follow. What are your citizens in Rwanda feeling in terms of value?

Paula Ingabire

So I’ll defer a little bit because I think value cannot just be seen in monetary terms and how are we going to have the return on investment? How do we just sustain this financially? It’s a good metric to use for sure. But I think the way we are looking and when I look at the use cases that we’ve already identified, one, it speaks to our government’s decision to make sure that we are delivering better services to our citizens. So whether it’s healthcare, whether it’s making sure that we’re giving quality education to our students in Rwanda, whether it’s making sure that a majority of our population, which is made up of farmers, have access to the right data and extension services that then ensure that they have a growth.

And productivity, which will translate essentially also in them being able to have more income and getting out of poverty and building wealth for their families. But a starting point for us has always been. what problem are we trying to solve? And is AI the best way to solve for this? Or is it a combination of AI and many other technologies that can solve for that? We’re a country that has been on a journey of digital transformation for more than 20 years. And so we’ve already started to see the benefit of that. So when I look at the education use cases, we are ranging from being able to facilitate teachers with assessment tools that can help with faster and better assessment.

We’re looking at AI solutions that support with better lesson planning. And so if you’re able to have better lesson planning, you’re able to deliver quality education and make sure that it’s similar across the country, then I think those are benefits that one can easily quantify. For the health sector, we’re looking at our frontline health workers or the community health workers delivering primary health care, giving them decision support tools that enable them to have better diagnosis, and at the same time to reduce the burden of the health care system. So we’re looking at AI solutions that support with better backlog of their in the health care references. him. Essentially, that’s also going to translate into less wastage, into better care, but also even bringing down the cost of care per person, if you look at it that way.

So for our people, they’re very optimistic. Obviously, like any other country, everyone has to wonder, okay, there’s lots of data that you’re going to be using. Some of it, a lot of it is going to be personal data. What guardrails are we putting in place? We have the data protection and privacy law that I talked about earlier. But the most important thing, even for people that they need, is how are we building capacity in -country? So that a lot of these things are not solutions we are acquiring from elsewhere, but we also have more than 70 % of our population that are in the youth bracket. It means these are already people that are very excited about technology, that if you train them the right way, they’ll also be part of building these solutions.

And so I think there’s a lot of optimism on what it can do. it doesn’t mean we’re shying away from what the risks are we think that’s why we’re doing everything by design use case by use case trying to understand for each use case that we are deploying what could be the risks that could be unique to that particular application and how we addressing it

Rudra Chaudhry

no i think that’s fantastic i think the way you’re thinking about disaggregated risk rather than just one big banner sticker on top is perhaps the way we all need to go and as we think about how is this use case risky but how is it actually useful and adds value in different ways um so that’s fantastic um keeping an eye on the clock um tara i just want to talk a little bit about deployment and scale um we all love diffusion we want this stuff out to everybody how do we get it right when it comes to deployment and scale because none of this is going to be easy it’s going to require some kind of a sustainable financial model it’s going to require a lot of time and a lot of time and a lot of time and a lot of time and a lot of time and a lot of work across the board and across borders so just give us someone who works on scale and deployment give us a viewpoint

Terah Lyons

Sure. And maybe just a few words on census scale in our context here at JPMorgan Chase. We operate in over 100 countries globally. We spend close to $20 billion a year on technology. And we are investing really, really deeply in AI. So, you know, I think to answer your question, one of the paradigms from which we come to this issue is certainly from the unique risk management capabilities of finance and regulated banks specifically. We’ve been using AI technologies at the use case level for over 10 years, you know, starting first with more traditional analytic techniques, moving into the era of machine learning models, now introducing large language models, looking in the direction of agentic capabilities and beyond. And I think underscoring one of the points that John raised earlier, which I think is important here.

You know, the sort of risk management posture and considering what effective governance and controls looks like in order to scale in the way that you’re describing is something we have built muscles to do before. We know how to do this pretty well. And one of the superpowers, I think, that we have is sector -specific lens on regulation and oversight. I think that also speaks to some of the great points that the minister just made with respect to really evaluating risk at the use case level. You know, make this conversation about risk management grounded and practical in ways that address the real ways in which AI is getting deployed at the level of individual use cases.

And then making rules of the road that are applicable to that specific context. I think that’s really crucial. The other kind of piece of the equation is, and this speaks to the point I made at the top about our global operations. I think that’s really crucial. We really need regulatory harmonization to the extent possible in order to allow for consistency of rules across borders. And I think that there’s been a lot of really, really rich conversation this week at the summit about sovereign AI as a part of the global governance conversation. I think that that has its own unique and important goals, and I think it needs to be held in the same sort of space as a realization that we also need to be considering what a global baseline looks like, what clarity enables for global operators so that they can really get responsibility at scale right.

Rudra Chaudhry

I’m going to ask you one question before I come back. What would you like to see going ahead? From this summit, the baton has been handed to Switzerland, and from Switzerland, there’s possibly another likely candidate. But what would you like this summit process to do in an institutional setting, perhaps, to keep these conversations going?

Terah Lyons

Well, I think that John’s earlier point about the need for multi -stakeholder diversity is really key. I think that looking across sectors, government, civil society, and industry is deeply important, and making sure all those voices are at the table is critical. I think a sub -point there, from my perspective, is that I would like to see more deployers sitting in seats like this one. We are one of the largest financial institutions in the world, and we use AI in really, really deep ways, as I mentioned before. But I want to see folks from retail, energy, I want to see people from manufacturing, I want to see folks who really represent the real economy sitting on stages like this one next year in Switzerland and speaking to how we deliver real value in the hands of customers and citizens every day using these technologies.

Rudra Chaudhry

And John, very quickly to you, I’m going to ask you a cheeky question. The kind of philanthropy I think that we require now in AI is for MacArthur to be working with a frontier lab. That’s working with a local lab that’s deploying. Is that in your imagination?

John Palfrey

Sure, Enredi, thank you. And I think it’s an exciting idea of going from here to Switzerland and imagining what could come next. And I think what could come next for philanthropy is absolutely an important piece of the story. And I think if you think about the way in which technology works, it often begets innovation in other sectors. So I think what’s exciting is that the technology itself can inform the way we practice philanthropy in ways you suggest, but it also can figure out how to regulate better. And it turns out, of course, regulation is not just against innovation. In fact, regulation sometimes prompts further innovation, and then this wonderful cycle can continue. So my sort of key point on this would be to say, let’s not have a false binary.

Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation. And I think that’s an exciting idea, not just for governments, as the minister said, or for banks. It’s true for philanthropy, too, which can improve its work a little bit along the way, too.

Rudra Chaudhry

No, bang on. And Minister, last word to you. We would love to see the summit. hosted in Kigali. From your vantage point, and a lot of this is about South -South cooperation, a lot of it has been about global cooperation. What would you like to see between now and Switzerland? What can we all actively do to make this more palpable by the time we get to Zurich or Geneva or Davos or wherever it is?

Paula Ingabire

I think it’s great that since we started with the Birchley Park convenings, we’re looking at safety, governance, and now it’s about impact, execution, implementation. It would be great that we start to quantify what that impact has looked like and also to create a way where these exchanges are truly happening. And I couldn’t agree more. If we have more of the people that are building and deploying some of these solutions here, we could have some of the communities that have either benefited positively or negatively here so we can have their voices. So as we go ahead with how large -scale adoption of this technology is going to be, I think it’s going to be a very, very important thing.

is going to happen across the world. We’re taking into consideration this conversation. And I think the last one for me is to make sure we have more voices coming from the African continent and elsewhere, so that we can sort of balance between where are we seeing the biggest impact? Is it in emerging economies? Is it in the middle economies or the big ones? And what could be the nuances as we continue to deploy massively? And I think to do that, we need to take this to the African continent sooner rather than later. And we’re happy to host you.

Rudra Chaudhry

There you are. Good offer there. Minister, John, Tara, thank you so much. Thank you for being with us at the Impact Summit. And back to the organizers. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (23)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“John Palfrey is the president of the John D. and Catherine T. MacArthur Foundation.”

The panel description identifies Mr. John Palfrey as the president of the MacArthur Foundation, confirming his role as stated in the report [S19].

Confirmedhigh

“Rwanda has enacted a data‑protection and privacy law to safeguard data sovereignty.”

Rwanda’s implementation of a data-protection and privacy law is documented in the knowledge base, confirming the report’s statement [S23] and [S22].

Additional Contextmedium

“Rwanda’s adaptive regulatory approach is built around concrete use‑cases, partnerships, and co‑creation with stakeholders.”

Additional details describe Rwanda’s emphasis on co-creation with beneficiaries and experts, and its flexible stance on emerging measures, providing nuance to the adaptive strategy mentioned [S108] and [S109].

Confirmedmedium

“The moderator opened a 25‑minute segment to frame the discussion.”

The opening remarks note a 25-minute timeframe for the panel, confirming the report’s timing detail [S1].

Confirmedmedium

“The moderator cited recent calls from India’s prime minister and France’s president for responsible AI diffusion in the Global South.”

The knowledge base references discussions on how France and India are building AI-related industrial and innovation bridges, supporting the claim that leaders from those countries have made recent calls on responsible AI [S102].

External Sources (114)
S1
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — The panelists joining us represent some of the most thoughtful voices on how AI is being built and adopted around the wo…
S2
Building Trusted AI at Scale – Keynote Anne Bouverot — -John Palfrey: Representative from the MacArthur Foundation (mentioned by Anne Bouverot but did not speak in this transc…
S3
FOSTERING FREEDOM ONLINE — – Deibert, Ronald, John Palfrey, Rafal Rohozinski and Jonathan Zittrain (eds). April 2010. Access Controlled: The Shapin…
S4
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — Her Excellency Paula Njibar is the minister of ICT and innovation for the government of Rwanda. Under her leadership, Rw…
S5
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S6
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — <strong>Naveen GV:</strong> out a long, lengthy form of information for that to be processed much later by another human…
S7
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S8
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S9
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S10
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — – John Tass-Parker- Terah Lyons – Terah Lyons- Harshil Mathur
S11
The Power of Satellites in Emergency Alerting and Protecting Lives — Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for this introductory remark. I will…
S12
Reinventing Digital Inclusion / DAVOS 2025 — – Paula Ingabire: Minister of Innovation, Technology and Innovation of Rwanda A major theme of the discussion was the l…
S13
AI: Lifting All Boats / DAVOS 2025 — – Paula Ingabire: Minister of Information, Communication Technology and Innovation of Rwanda Paula Ingabire: Maybe Vij…
S14
UNECA Role in the Internet Ecosystem in Africa | IGF 2023 Open Forum #110 — Hon. Paula Ingabire, Minister of Information and Communications Technology (ICT)
S15
Artificial intelligence (AI) – UN Security Council — The global focus on Artificial Intelligence (AI) capacity-building efforts has been a significant topic of discussion am…
S16
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S17
Open Forum #33 Building an International AI Cooperation Ecosystem — International Cooperation and Multi-stakeholder Approach Klauweiter argues that since AI governance is a global problem…
S18
AI: The Great Equaliser? — Rwanda has been digitising various functions and services for nearly two decades, and most government services are now a…
S19
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — “And so the regulatory posture that we take then is more adaptive”[1]. “And then we’re able to build regulations accordi…
S20
Global AI Policy Framework: International Cooperation and Historical Perspectives — I think that’s my understanding. I think now we need to see as a human civilization. Obviously, cultures are very differ…
S21
Accelerating Structural Transformation and Industrialization in Developing Countries: Navigating the Future with Advanced ICTs and Industry 4.0 — **Local Capacity Building**: The priority of developing local expertise over simply importing advanced equipment. 3. **…
S22
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Policies play a crucial role in creating a conducive data ecosystem. Rwanda’s implementation of a data protection and pr…
S23
Thinking Big on Digital Inclusion — Data protection and privacy are essential considerations in the digitisation process. Rwanda has implemented data protec…
S24
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S25
Building Public Interest AI Catalytic Funding for Equitable Compute Access — India is proving that you can design AI ecosystems that are both globally competitive and globally competitive. And loca…
S26
Press Conference: Closing the AI Access Gap — An important aspect of the alliance’s work is the creation of relevant international frameworks and public-private partn…
S27
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — The SADC region’s cross-border financial inclusion project demonstrates this principle, focusing on solving real problem…
S28
Agenda item 5 : Day 4 Afternoon session — A central point of discussion was the “Needs-Based Capacity Building Catalogue,” proposed by the Philippines. This propo…
S29
Agenda item 6 — Chair:Thank you, UNIDIR, for your statement and also for all the work that you do. Friends, it’s ten minutes to one, and…
S30
Panel Discussion Data Sovereignty India AI Impact Summit — This comment introduces a powerful paradigm shift from a deficit mindset to an asset-based approach. Instead of focusing…
S31
Indias AI Leap Policy to Practice with AIP2 — The discussion revealed tensions between global harmonization and local adaptation needs. Adams argued against one-size-…
S32
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S33
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chri…
S34
Democratizing AI Building Trustworthy Systems for Everyone — And I think that’s critical to ensure that if you want to democratize and ensure that GlobalSoft is integral to that, an…
S35
Local, Everywhere: The blueprint for a Humanitarian AI transformation — Trust:AI developed and governed by humanitarian organisations, rather than opaque commercial platforms, can be aligned w…
S36
How to make AI governance fit for purpose? — This comment elevated the discussion to a more philosophical level, moving beyond technical regulatory approaches to con…
S37
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S38
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Forming alliances in global digital governance is crucial. Initiatives such as the Coalition for Digital Environmental S…
S39
AI/Gen AI for the Global Goals — Need for multi-stakeholder collaboration including governments, private sector, and civil society
S40
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S41
WS #98 Towards a global, risk-adaptive AI governance framework — 4. The potential need for sector-specific and use case-specific governance rather than one-size-fits-all approaches. Ti…
S42
Technology Rewiring Global Finance: A Panel Discussion Summary — – Jayee Koffey- Changpeng Zhao ING operates in 35 countries and faces different regulations. Examples include MiCA cryp…
S43
WS #283 AI Agents: Ensuring Responsible Deployment — Government Perspectives and Regulatory Approaches Lazanski points out that regulatory frameworks are emerging different…
S44
Building Population-Scale Digital Public Infrastructure for AI — To address this challenge, the Gates Foundation is investing in “scaling hubs” in Rwanda, Nigeria, Senegal, and soon Ken…
S45
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting ## Military and Dual-Use Applications Virginia Dignam: Thank you very much, Isador…
S46
The Foundation of AI Democratizing Compute Data Infrastructure — “So we are identifying agriculture, education, healthcare, and some more.”[83]. “So inspire them that they can really do…
S47
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — “The general panel is about policy on the one side, adoption on the other”[52]. “…we have to work downstream and upstr…
S48
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S49
Setting the Rules_ Global AI Standards for Growth and Governance — So consensus around the need to do it, consensus around the fact that it’s hard, but it’s important for consumers and bu…
S50
Why science metters in global AI governance — The discussion maintained a consistently serious, collaborative, and optimistic tone throughout. Speakers emphasized urg…
S51
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S52
AI as critical infrastructure for continuity in public services — So the participation of the community into that, in ensuring that the innovation and the policy level align with the nee…
S53
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Participants emphasized the importance of involving diverse stakeholders in policy development, including marginalized g…
S54
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — The speakers demonstrated remarkable consensus on the need for alternative approaches to AI development that prioritize …
S55
Indias AI Leap Policy to Practice with AIP2 — The speakers demonstrated strong consensus on fundamental prerequisites for AI diffusion: skills development, clear gove…
S56
Closing remarks – Charting the path forward — Mentioned key sectors such as health, education, and agriculture as areas where communities should be empowered to innov…
S57
AI for agriculture Scaling Intelegence for food and climate resiliance — All these have been put in the one platform. You can just make a – presently it is working in English and Hindi, but in …
S58
Scaling Innovation Building a Robust AI Startup Ecosystem — Very high level of consensus with unanimous praise for STPI’s multifaceted support and shared recognition of technology’…
S59
Conversational AI in low income &amp; resource settings | IGF 2023 — Rajendra Pratap Gupta supports using voice-based data through Conversational AI to increase the accuracy and volume of h…
S60
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — And in terms of regulation, Reserve Bank’s approach has been largely tech neutral. It’s tech agnostic in some sense, bec…
S61
Secure Finance Risk-Based AI Policy for the Banking Sector — And these systems are fueled by vast data sets drawn from public and proprietary sources. On this foundation operate lar…
S62
WS #98 Towards a global, risk-adaptive AI governance framework — Sulafah Jabarti: OK, so I guess we all agree that AI has been reshaping the economy and the society all over the world…
S63
Building Sovereign and Responsible AI Beyond Proof of Concepts — Valuable AIextends beyond financial metrics to consider real-world benefits and measurable improvements in people’s live…
S64
Comprehensive Report: Preventing Jobless Growth in the Age of AI — -Sharing Productivity Benefits: Labor representative Liz Shuler raised concerns about ensuring workers receive fair shar…
S65
Swiss AI Initiatives and Policy Implementation Discussion — Risk quantification should be done in monetary terms to enable data-driven investment decisions and compare potential be…
S66
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — AI is well-positioned to improve businesses and banking by automating processes. This is enabled by enhancing the capaci…
S67
Technology Regulation and AI Governance Panel Discussion — All three speakers acknowledge that regulatory reform approaches must be adapted to each country’s specific context, ins…
S68
Wrap up — These key comments fundamentally reframed the discussion from typical technology policy debates to deeper philosophical …
S69
State of Play: AI Governance / DAVOS 2025 — While all speakers advocate for some form of regulation, they differ in their specific approaches. Krishna proposes a ri…
S70
Main Session 2: The governance of artificial intelligence — Mashologu advocates for context-aware regulatory innovation that includes regulatory sandboxes, human interlock mechanis…
S71
Global AI Policy Framework: International Cooperation and Historical Perspectives — High level of consensus on fundamental principles and approaches, with differences mainly in emphasis and specific imple…
S72
State of play of major global AI Governance processes — Its flexibility and adaptability are praised for bridging institutional, cultural, and regional practices. A cooperative…
S73
Agenda item 6 — Chair:Thank you, UNIDIR, for your statement and also for all the work that you do. Friends, it’s ten minutes to one, and…
S74
WS #98 Towards a global, risk-adaptive AI governance framework — Focus on use case and sector-specific governance rather than blanket regulations
S75
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Alain Ndayishimiye:Yes thank you moderator once again let me take the opportunity to greet everyone whatever you are in …
S76
Agenda item 5 : Day 4 Afternoon session — A central point of discussion was the “Needs-Based Capacity Building Catalogue,” proposed by the Philippines. This propo…
S77
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — From the African perspective, Desire Kachenje highlighted that DPI development is government-driven but ecosystem-enable…
S78
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion — So I believe a global compact is possible. However, it has to reflect the different contexts, cultural, linguistic, ever…
S79
Setting the Rules_ Global AI Standards for Growth and Governance — Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and …
S80
Global AI Policy Framework: International Cooperation and Historical Perspectives — Baumann argues for a balanced approach that establishes shared global norms while allowing flexibility for countries to …
S81
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S82
Local, Everywhere: The blueprint for a Humanitarian AI transformation — Trust:AI developed and governed by humanitarian organisations, rather than opaque commercial platforms, can be aligned w…
S83
Closing remarks — This is a profound philosophical insight that reframes the entire trust discussion around AI. Rather than focusing on ma…
S84
Shaping AI’s Story Trust Responsibility &amp; Real-World Outcomes — But to me, there’s no question that if you are, and when you are introducing agentic technology, you need to take the re…
S85
Welcome Address — “How to make AI machine -centric and human -centric?”[33]. “Friends, the future of work will be inclusive, trusted, and …
S86
AI in Action: When technology serves humanity — Across these domains (conservation, disaster response, language preservation, small business, and agriculture), technolo…
S87
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperatio…
S88
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S89
AI/Gen AI for the Global Goals — Need for multi-stakeholder collaboration including governments, private sector, and civil society
S90
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Multi-stakeholder collaboration involving equality bodies, civil society, affected communities, and regulators is essent…
S91
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Forming alliances in global digital governance is crucial. Initiatives such as the Coalition for Digital Environmental S…
S92
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S93
Technology Rewiring Global Finance: A Panel Discussion Summary — – Jayee Koffey- Changpeng Zhao ING operates in 35 countries and faces different regulations. Examples include MiCA cryp…
S94
Lightning Talk #107 Irish Regulator Builds a Safe and Trusted Online Environment — Importance of cross-border regulatory coordination
S95
What is it about AI that we need to regulate? — What is it about AI that we need to regulate?The discussions across the Internet Governance Forum 2025 sessions revealed…
S96
AI for food systems — LJ Rich: Thank you so much, Seizo Onoe, for your opening remarks. And now we’ll turn to our fabulous panelists. Ladies a…
S97
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 2 — – Australia (mentioned but did not speak) – Brazil (mentioned but did not speak) – China (mentioned but did not speak)…
S98
Opening of the session — – Albania (mentioned but did not speak in this transcript) – Brazil (mentioned but did not speak in this transcript) -…
S99
The Global Economic Outlook — – Borge: World Economic Forum executive (mentioned but did not speak)
S100
Launch / Award Event #223 Affordable Access for Education and Health Aa4edu — – **Jonathan Moringani** – Basic Internet Foundation, rapporteur (mentioned in introduction but did not speak)
S101
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued)/ part 6 — Chair (Ambassador Gafoor) This comment is insightful because it frames the entire discussion around the delicate nature…
S102
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The discussion maintained a consistently optimistic and collaborative tone throughout, characterized by mutual respect b…
S103
Closure of the session — Echoing France’s input, they agreed on the need for a consistent institutional dialogue structure to address crucial cyb…
S104
(Day 6) General Debate – General Assembly, 79th session: morning session — Ernest Rwamucyo – Rwanda: At the outset, I would like to congratulate Ambassador Philemon Young on assuming the presid…
S105
Open Forum #26 High-level review of AI governance from Inter-governmental P — Thelma Quaye: Thank you very much. Good evening, everybody. So I’d like to clarify, Smart Africa is not a multinatio…
S106
Multistakeholder Dialogue on National Digital Health Transformation — Sean Blaschke: Thanks, Leah. I’m going to try to apply the same architecture framework to legislation, policy, complia…
S107
WSIS Action Line C7 E-environment — Anita Batamuliza from the Rwanda Utilities Regulatory Authority, who chairs an East African collaboration working group,…
S108
Ad Hoc Consultation: Thursday 1st February, Afternoon session — During a recent conference, the Rwandan representative took the stage to address a topic which, although unspecified, se…
S109
Fixing Healthcare, Digitally — Co-creation is another key aspect highlighted in the analysis. In order to ensure effective implementation and regulatio…
S110
How Trust and Safety Drive Innovation and Sustainable Growth — You always ask me the tough questions. I think, first of all, the harms question, because I think that’s relevant to the…
S111
Summary — Stakeholders follow the following principles when dealing with issues relating to protection against cyber risks. The go…
S112
How AI Drives Innovation and Economic Growth — Artificial intelligence | Financial mechanisms | Social and economic development Kremer explains that while private com…
S113
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/4/OEWG 2025 — Israel: Good morning and thank you, Chair. We will present in brief, for the sake of time, some main points of our nat…
S114
Regional perspectives on digital governance | IGF 2023 Open Forum #138 — Luis Barbosa:Yeah. I’m thinking again about what Nibal was saying. I think there is a path that international organizati…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument142 words per minute396 words167 seconds
Argument 1
AI for all requires every nation’s active participation; global cooperation is the cornerstone of equitable AI development
EXPLANATION
The speaker emphasizes that achieving AI that benefits everyone depends on the involvement of all countries. Global cooperation is presented as essential to ensure AI development is fair and inclusive.
EVIDENCE
In the opening remarks the speaker thanks the audience and highlights the importance of hearing perspectives from countries like Sweden, noting that when discussing “AI for all” and global cooperation, the role of each country becomes “very, very important” [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for worldwide cooperation is echoed in discussions about AI governance as a global problem that requires an inclusive UN platform [S17] and in calls for coordinated capacity-building across nations [S15]; cultural and contextual differences that must be reconciled are highlighted in analyses of global AI policy frameworks [S20].
MAJOR DISCUSSION POINT
Opening Remarks on Global Cooperation and “AI for All”
P
Paula Ingabire
6 arguments188 words per minute1412 words449 seconds
Argument 1
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire)
EXPLANATION
Paula argues that Rwanda prefers regulations that are shaped by concrete AI use‑cases rather than broad, abstract rules. This adaptive approach allows the government to tailor rules to the specific risks and benefits of each application.
EVIDENCE
She explains that Rwanda focuses on identifying where AI creates the biggest societal benefits and then builds regulations specific to those use cases, describing the regulatory posture as “adaptive” and evidence-based because it is informed by ongoing pilots [40-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel remarks describe Rwanda’s regulatory posture as “more adaptive” and built around concrete use-cases, matching the description in the Building Trusted AI at Scale discussion [S19].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
John Palfrey, Terah Lyons
DISAGREED WITH
John Palfrey
Argument 2
A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire)
EXPLANATION
Paula states that a worldwide AI agreement can work, provided it respects the diverse cultural, linguistic and contextual realities of each nation. Shared standards would exist, but they would be adapted to local problem‑solving needs.
EVIDENCE
She says a global compact is possible but must reflect different contexts, and that shared non-negotiable standards should be contextualised to the specific problems each nation tackles with AI [61-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of global AI policy stress the importance of respecting cultural and linguistic diversity when shaping international agreements [S20], and emphasize that inclusive platforms like the UN are needed to accommodate all nations [S17].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
Speaker 1, Rudra Chaudhry, John Palfrey
Argument 3
Partnerships should prioritize co‑development and local skill‑building rather than simply importing foreign solutions (Paula Ingabire)
EXPLANATION
Paula stresses that Rwanda seeks partnerships that involve joint development and capacity building, not just the acquisition of ready‑made foreign tools. The goal is to ensure Rwandan staff acquire the expertise to own and maintain AI solutions.
EVIDENCE
She gives the example that Rwanda will not simply acquire a foreign solution and have it trained on local data; instead, partners must train Rwandan people and co-develop the technology so that local capacity is built [48-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on digital inclusion in developing countries highlight that building local expertise is prioritized over importing technology, reinforcing the co-development approach [S21].
MAJOR DISCUSSION POINT
Partnerships, Capacity Building, and Data Sovereignty
AGREED WITH
John Palfrey, Rudra Chaudhry
Argument 4
Rwanda is establishing a national data hub and robust data‑protection law to ensure data sovereignty by design (Paula Ingabire)
EXPLANATION
Paula describes Rwanda’s proactive steps to safeguard data sovereignty, including the creation of a national data hub and the enactment of a data protection and privacy law. These measures are intended to set guardrails before any crisis emerges.
EVIDENCE
She notes that Rwanda is building a national data hub and has already put in place a data protection and privacy law that governs data collection, use, and processing, forming the foundation for data-sovereignty-by-design [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rwanda’s implementation of a data-protection and privacy law is cited as a key step toward data sovereignty and responsible digitisation [S22][S23].
MAJOR DISCUSSION POINT
Partnerships, Capacity Building, and Data Sovereignty
AGREED WITH
John Palfrey, Terah Lyons
Argument 5
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
EXPLANATION
Paula argues that AI deployments must be financially sustainable and show tangible benefits to citizens in key sectors. She links value creation in health, education and agriculture to improved livelihoods and poverty reduction.
EVIDENCE
She outlines use-cases such as AI-enabled teacher assessment tools, lesson-planning support, and decision-support for community health workers, explaining how these improve service quality, reduce costs and increase farmer productivity, thereby creating measurable benefits [129-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a sustainable financial model to support AI diffusion across sectors is highlighted in the Building Trusted AI at Scale discussion [S19].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
Rudra Chaudhry, Terah Lyons, John Palfrey
DISAGREED WITH
John Palfrey, Rudra Chaudhry
Argument 6
South‑South cooperation and greater African representation are essential; hosting future meetings in Kigali would amplify emerging‑economy perspectives (Paula Ingabire)
EXPLANATION
Paula calls for more African participation and suggests that future summits be hosted in Kigali to showcase South‑South collaboration. She believes this would bring African voices to the forefront of AI impact discussions.
EVIDENCE
She remarks that it would be great to quantify impact, involve communities that have benefited (or not), and stresses the need for more African representation, offering Rwanda as a host for upcoming gatherings [205-216].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
J
John Palfrey
6 arguments240 words per minute844 words210 seconds
Argument 1
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
EXPLANATION
John stresses that AI should be regulated like any other technology, with humans at the centre, rather than being viewed as a mysterious, untouchable force. A stable regulatory framework is needed to align AI development with human welfare.
EVIDENCE
He states that AI should not be treated as “magical”, but connected to human goals such as lifting people out of poverty, improving health care, and providing capital, and calls for a stable regulatory regime that puts humans first [95-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists call for a stable regulatory regime that puts humans at the centre rather than treating AI as a mysterious force [S19].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
Terah Lyons, Paula Ingabire
DISAGREED WITH
Paula Ingabire
Argument 2
Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey)
EXPLANATION
John proposes that philanthropy should partner with frontier labs and civil‑society groups to build local capacity and promote inclusive AI. Such collaborations can also inform better regulation and innovation cycles.
EVIDENCE
He responds positively to the idea of working with a frontier lab, noting that technology can inform philanthropy practice and regulation, and that regulation can spur further innovation, rejecting a false binary between the two [188-198].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Philanthropy’s role in partnering with frontier labs and civil-society groups to build local capacity is discussed in reports on catalytic funding and collaborative models in India [S25] and in the summit dialogue on philanthropy-driven innovation [S19].
MAJOR DISCUSSION POINT
Partnerships, Capacity Building, and Data Sovereignty
AGREED WITH
Paula Ingabire, Rudra Chaudhry
Argument 3
Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
EXPLANATION
John argues that civil society needs sustained funding, which philanthropy historically provides, to keep a human‑centric perspective in AI development. Without such support, fewer voices would be able to influence AI governance.
EVIDENCE
He notes that civil society does not come for free, that philanthropy has historically funded it, and that the Indian philanthropic environment is promising, highlighting partnerships with local organisations [104-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Long-term, catalytic funding from philanthropic foundations is presented as essential for sustaining civil-society engagement in AI governance [S25] and is reinforced by observations of philanthropy’s historic support for civil society [S19].
MAJOR DISCUSSION POINT
Philanthropy, Funding, and the Role of Civil Society
AGREED WITH
Rudra Chaudhry, Paula Ingabire
DISAGREED WITH
Paula Ingabire, Rudra Chaudhry
Argument 4
Over a billion dollars have been pledged by philanthropic initiatives to support AI for humanity, underscoring the sector’s commitment (John Palfrey)
EXPLANATION
John cites the scale of philanthropic funding dedicated to AI for humanity, indicating a strong commitment from the sector. He mentions specific fundraising achievements that together exceed a billion dollars.
EVIDENCE
He reports that colleagues have raised half a billion dollars for Humanity AI in the US and a similar amount for a global AI effort, totaling over a billion dollars in commitments [120-121].
MAJOR DISCUSSION POINT
Philanthropy, Funding, and the Role of Civil Society
AGREED WITH
Paula Ingabire, Rudra Chaudhry, Terah Lyons
Argument 5
Engaging civil‑society organisations, think‑tanks, and academic partners creates the sensibility needed for responsible AI governance (John Palfrey)
EXPLANATION
John emphasizes that involving civil society, think‑tanks and academia brings the necessary sensibility to AI governance. These actors help ensure that AI development aligns with broader societal values.
EVIDENCE
He credits the summit for including civil society, mentions partnerships with the Center for Exponential Change and other Indian initiatives, and highlights the role of organizations like the Observer Research Foundation and Partnership for AI in shaping responsible AI [105-108][114-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inclusion of civil-society, think-tanks and academic partners in AI governance discussions is highlighted as a way to bring necessary sensibility and diverse perspectives [S19].
MAJOR DISCUSSION POINT
Philanthropy, Funding, and the Role of Civil Society
Argument 6
Avoid a false binary between regulation and innovation; instead, let governance frameworks stimulate further AI breakthroughs (John Palfrey)
EXPLANATION
John argues that regulation and innovation are not opposing forces; instead, well‑designed governance can drive further AI advances. He calls for moving beyond a simplistic dichotomy.
EVIDENCE
He states “let’s not have a false binary. Either you regulate or you innovate. Let’s figure out the way that the regulation and the governance drives innovation” [195-198].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
R
Rudra Chaudhry
3 arguments197 words per minute1177 words357 seconds
Argument 1
Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry)
EXPLANATION
Rudra questions whether a global AI compact can work and suggests that any global norms should be adapted within national legal frameworks rather than being forced from above.
EVIDENCE
He asks a challenging question about the feasibility of a global compact and whether norms should be thought of as fitting into national jurisdictions [59-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary on AI governance stresses that global norms should be adapted within national legal frameworks rather than imposed from above, echoing concerns raised about top-down approaches [S17] and the need to reconcile cultural differences [S20].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
AGREED WITH
Speaker 1, Paula Ingabire, John Palfrey
Argument 2
Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry)
EXPLANATION
Rudra emphasizes that scaling AI requires sustainable financial models and careful pacing, warning that deployment must be both responsible and economically feasible.
EVIDENCE
He remarks that diffusion will need a sustainable financial model, time, and cross-border work, and calls for a viewpoint on scale and deployment [155-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussions on AI diffusion emphasize the requirement for sustainable financial models that balance rapid deployment with responsible oversight [S19].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
Paula Ingabire, Terah Lyons, John Palfrey
DISAGREED WITH
Paula Ingabire, John Palfrey
Argument 3
Continuous engagement beyond the summit—through concrete impact metrics and ongoing exchanges—will keep momentum alive (Rudra Chaudhry)
EXPLANATION
Rudra calls for institutionalising follow‑up mechanisms, such as impact measurement and regular exchanges, to ensure the summit’s outcomes are sustained until the next meeting.
EVIDENCE
He asks what the summit process should do in an institutional setting to keep conversations going and mentions the need for concrete impact metrics and ongoing exchanges [175-178].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
AGREED WITH
Paula Ingabire, John Palfrey
T
Terah Lyons
4 arguments165 words per minute1020 words369 seconds
Argument 1
Foundational policy questions (fairness, transparency, bias, standards) raised a decade ago remain central today (Terah Lyons)
EXPLANATION
Terah notes that many of the AI policy concerns first raised ten years ago—fairness, transparency, bias mitigation, standards—are still the core issues being debated today.
EVIDENCE
She recounts that the early Obama-era discussions already covered fairness, transparency, bias, standards, and that these foundational questions remain central after a decade [71-78].
MAJOR DISCUSSION POINT
Governance vs. Adoption and the Feasibility of a Global AI Compact
Argument 2
The hardest challenges are human and institutional, not technical; building trust is prerequisite for wide adoption (Terah Lyons)
EXPLANATION
Terah argues that the most difficult problems in AI are not technical but relate to human and institutional factors, especially trust. Trust is essential for scaling AI responsibly.
EVIDENCE
She states that the hardest questions are human and institutional, that AI must be useful to real organisations, and that trust and responsible scaling are cornerstones for adoption [82-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI’s societal impact underline that technology is a product of human design and that trust and human responsibility are central to responsible deployment [S24].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
John Palfrey, Paula Ingabire
Argument 3
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
EXPLANATION
Terah explains that JPMorgan Chase’s experience in risk management and regulated finance equips it to help scale AI responsibly, and she stresses the need for regulatory harmonisation for global operators.
EVIDENCE
She describes JPMorgan’s 10-year use-case level AI experience, its risk-management posture, sector-specific regulatory insight, and the importance of regulatory harmonisation across borders [156-174].
MAJOR DISCUSSION POINT
Diffusion, Scale, Trust, and Sustainable Business Models
AGREED WITH
Paula Ingabire, Rudra Chaudhry, John Palfrey
Argument 4
The next summit should institutionalise multi‑stakeholder dialogue, bringing more deployers from industry, energy, manufacturing, etc., to the table (Terah Lyons)
EXPLANATION
Terah calls for the next summit to broaden participation beyond finance, inviting representatives from retail, energy, manufacturing and other real‑economy sectors to share deployment experiences.
EVIDENCE
She says she would like to see more deployers from sectors like retail, energy, and manufacturing sitting on future panels, noting JPMorgan’s deep AI use and the need for diverse industry voices [179-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for broader, multi-stakeholder participation in AI governance, including industry sectors beyond finance, are reflected in discussions about inclusive UN platforms and global cooperation frameworks [S17].
MAJOR DISCUSSION POINT
Future Institutional Cooperation and Summit Continuity
Agreements
Agreement Points
All speakers emphasized the necessity of a regulatory framework that is adaptive, use‑case specific and grounded in human‑centred values rather than treating AI as a magical, ungovernable force.
Speakers: Paula Ingabire, John Palfrey, Terah Lyons
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire) AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey) Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons)
Paula described Rwanda’s adaptive, evidence-based regulatory posture built around concrete AI pilots [40-44]; John called for a stable regulatory regime that keeps humans at the centre and rejects the notion of AI as magical [95-99]; Terah highlighted JPMorgan’s sector-specific risk-management experience and the need for regulatory harmonisation to scale responsibly [161-169].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the risk-adaptive and context-aware regulatory approaches advocated in recent global AI governance reports, such as the risk-adaptive framework discussed at the AI Governance Forum [S62] and the emphasis on context-specific sandboxes in OECD/UN discussions [S70][S67][S72].
Broad consensus that partnerships must prioritize co‑development, local capacity building and engagement of civil‑society to ensure inclusive and sustainable AI deployment.
Speakers: Paula Ingabire, John Palfrey, Rudra Chaudhry
Partnerships should prioritize co‑development and local skill‑building rather than simply importing foreign solutions (Paula Ingabire) Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey) Continuous engagement beyond the summit—through concrete impact metrics and ongoing exchanges—will keep momentum alive (Rudra Chaudhry)
Paula stressed that partners must train Rwandan staff and co-develop solutions instead of just delivering ready-made tools [48-49]; John welcomed the idea of philanthropy working with frontier labs and civil-society to build capacity and inform regulation [188-198]; Rudra called for institutional follow-up and impact measurement to sustain collaboration [175-178].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on co-development and local capacity aligns with the Gates Foundation’s “scaling hubs” model for digital public infrastructure in Africa, which foregrounds partnership with governments and civil society [S44], and with broader calls for inclusive policy-making in digital public infrastructure initiatives [S53][S46].
All participants recognized the importance of global cooperation and a worldwide AI compact that respects cultural and contextual diversity while establishing shared non‑negotiable standards.
Speakers: Speaker 1, Paula Ingabire, Rudra Chaudhry, John Palfrey
AI for all requires every nation’s active participation; global cooperation is the cornerstone of equitable AI development (Speaker 1) A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire) Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry) AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
Speaker 1 highlighted the need for every country’s participation in AI for all [2]; Paula affirmed that a global compact can work if it reflects diverse contexts [61-65]; Rudra questioned whether global norms should fit national jurisdictions [59-60]; John reinforced the need for governance that serves humanity, aligning with a globally shared but locally adapted approach [95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for a worldwide AI compact reflects the growing consensus on international standards seen in the “Setting the Rules” report and UN-led multilateral cooperation, which stress shared non-negotiable standards while respecting cultural diversity [S49][S50][S71][S72].
There was unanimous agreement that sustainable financial models and clear value‑demonstration are essential for scaling AI diffusion across sectors such as health, education and agriculture.
Speakers: Paula Ingabire, Rudra Chaudhry, Terah Lyons, John Palfrey
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire) Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry) Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons) Over a billion dollars have been pledged by philanthropic initiatives to support AI for humanity, underscoring the sector’s commitment (John Palfrey)
Paula outlined use-cases delivering measurable benefits and the need for OPEX models [129-146]; Rudra warned that diffusion must be backed by sustainable financial models [122-127]; Terah cited JPMorgan’s $20 billion annual tech spend and risk-management experience as a basis for scaling responsibly [158-160][161-169]; John reported more than a billion dollars of philanthropic commitments to AI for humanity [120-121].
POLICY CONTEXT (KNOWLEDGE BASE)
Sustainable financial models and value demonstration echo the financing strategies outlined in the Gates scaling hubs [S44], the need for sustainable diffusion models discussed in the “Building Trusted AI at Scale” panel [S47], and sector-specific pilots in health, education and agriculture documented in WHO roundtables and India’s AI Leap policy [S51][S55][S56][S57][S58].
All speakers agreed that building trust and ensuring AI serves human needs are prerequisites for widespread adoption.
Speakers: John Palfrey, Terah Lyons, Paula Ingabire
AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey) The hardest challenges are human and institutional, not technical; building trust is prerequisite for wide adoption (Terah Lyons) Rwanda is establishing a national data hub and robust data‑protection law to ensure data sovereignty by design (Paula Ingabire)
John argued that AI should be regulated to keep humans at the centre and avoid mystifying the technology [95-99]; Terah emphasized that trust and human-institutional issues are the biggest hurdles and essential for scaling [82-88]; Paula described Rwanda’s data-protection law and national data hub as foundations for trust and sovereignty [51-54].
POLICY CONTEXT (KNOWLEDGE BASE)
Building trust and human-centred AI is a core principle in the “From principles to practice” consensus on safety-by-design and transparency [S48], WHO’s push for “glass-box” AI in health [S51], and community-centric approaches in public-service continuity [S52][S59].
Consensus on the need for ongoing monitoring, impact measurement and institutional mechanisms to keep the momentum of the summit alive.
Speakers: Rudra Chaudhry, Paula Ingabire, John Palfrey
Continuous engagement beyond the summit—through concrete impact metrics and ongoing exchanges—will keep momentum alive (Rudra Chaudhry) It would be great that we start to quantify what that impact has looked like and also to create a way where these exchanges are truly happening (Paula Ingabire) Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
Rudra called for institutional follow-up and impact metrics to sustain dialogue [175-178]; Paula echoed the need to quantify impact and maintain exchanges [205-207]; John highlighted the role of long-term philanthropic capital in supporting civil-society and sustained effort [104-109].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for monitoring and institutional oversight is highlighted in multi-stakeholder governance recommendations that call for impact-measurement frameworks and long-term oversight, as seen in AI governance roadmaps and the Digital Public Infrastructure policy-harmonisation work [S48][S53][S71].
Similar Viewpoints
Both stress that effective AI deployment requires partnerships that build local capacity and involve civil‑society, rather than merely importing external solutions [48-49][188-198].
Speakers: Paula Ingabire, John Palfrey
Partnerships should prioritize co‑development and local skill‑building rather than simply importing foreign solutions (Paula Ingabire) Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey)
Both link financial sustainability with sector‑specific risk management, arguing that responsible scaling depends on clear economic models and robust risk controls [161-169][129-146].
Speakers: Terah Lyons, Paula Ingabire
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons) Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire)
All three acknowledge the need for a global framework on AI that respects national contexts and is built through cooperative, multilateral effort [2][61-65][59-60].
Speakers: Speaker 1, Paula Ingabire, Rudra Chaudhry
AI for all requires every nation’s active participation; global cooperation is the cornerstone of equitable AI development (Speaker 1) A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire) Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry)
Unexpected Consensus
Finance sector and government aligning on sector‑specific regulatory risk‑management as a cornerstone for AI scaling.
Speakers: Terah Lyons, Paula Ingabire
Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and regulatory harmonisation across borders (Terah Lyons) Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire)
It is notable that a senior banking executive and a government minister independently converged on the idea that regulation should be tailored to concrete use-cases and managed through sector-specific risk frameworks, highlighting a rare cross-sectoral alignment [161-169][40-44].
POLICY CONTEXT (KNOWLEDGE BASE)
Alignment of finance and government on risk-management mirrors the tech-neutral, consumer-protection-focused regulatory stance of the Reserve Bank in the Global South and the risk-based AI policy frameworks for banking outlined by the Financial Stability Board [S60][S61].
Philanthropy and government both emphasizing the need for measurable impact and long‑term funding rather than one‑off projects.
Speakers: John Palfrey, Paula Ingabire
Over a billion dollars have been pledged by philanthropic initiatives to support AI for humanity, underscoring the sector’s commitment (John Palfrey) It would be great that we start to quantify what that impact has looked like and also to create a way where these exchanges are truly happening (Paula Ingabire)
While philanthropy traditionally focuses on grant-making, John’s emphasis on large-scale, long-term capital aligns with Paula’s call for impact quantification, revealing an unexpected shared focus on measurable, sustained outcomes [120-121][205-207].
POLICY CONTEXT (KNOWLEDGE BASE)
The focus on measurable impact and long-term funding resonates with the Gates Foundation’s multi-year investment model for AI hubs [S44], the call for sustainable financing in diffusion discussions [S47], and the emphasis on quantifiable societal benefits in responsible AI initiatives [S63].
Overall Assessment

The panel displayed a strong convergence around four core themes: (1) the need for adaptive, use‑case‑driven regulation anchored in human‑centred values; (2) the importance of partnership models that build local capacity and involve civil‑society; (3) the requirement for global cooperation that respects national contexts; and (4) the necessity of sustainable financial mechanisms and impact measurement to drive responsible diffusion.

High consensus – most speakers reiterated similar points from different angles, indicating a shared understanding that responsible AI deployment hinges on coordinated governance, capacity building, inclusive global frameworks, and financially sustainable models. This broad agreement suggests that future policy initiatives are likely to prioritize adaptive regulation, multi‑stakeholder partnerships, and measurable impact tracking.

Differences
Different Viewpoints
Regulatory approach: adaptive, use‑case‑specific regulation vs. a stable, overarching regulatory regime
Speakers: Paula Ingabire, John Palfrey
Adaptive, use‑case‑specific regulation is more effective than abstract rules (Paula Ingabire) AI must be governed to serve humans; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force (John Palfrey)
Paula argues that Rwanda prefers an adaptive, evidence-based regulatory posture that is built around concrete AI use cases, whereas John stresses that AI should be governed by a stable, predictable regulatory framework that treats the technology like any other and avoids the myth of it being “magical” [40-44][95-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between adaptive, use-case-specific regulation and a stable overarching regime is reflected in divergent regulatory philosophies presented at recent AI governance panels, including risk-based versus light-touch approaches and the advocacy for regulatory sandboxes [S67][S69][S70][S62].
Funding and diffusion models for AI deployment
Speakers: Paula Ingabire, John Palfrey, Rudra Chaudhry
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire) Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey) Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry)
Paula emphasizes that AI’s value should not be reduced to monetary ROI and points to societal benefits in health, education and agriculture, while John calls for long-term philanthropic capital to fund civil-society and sustain AI for humanity, and Rudra insists on clear OPEX/revenue models and sustainable financing for large-scale diffusion [129-146][104-109][120-121][155-158].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over funding and diffusion models echo the various financing experiments described in the Gates scaling hubs, the sustainable financial model discourse in trusted AI panels, and startup-ecosystem support mechanisms aimed at scaling AI across sectors [S44][S47][S58][S63].
Unexpected Differences
Monetary vs. non‑monetary valuation of AI benefits
Speakers: Paula Ingabire, Rudra Chaudhry, John Palfrey
Sustainable diffusion requires clear OPEX/revenue models and demonstrable citizen value in sectors like health, education, and agriculture (Paula Ingabire) Moderators stress the need for a realistic, financially viable deployment strategy that balances speed with responsibility (Rudra Chaudhry) Philanthropy must provide long‑term capital to empower civil‑society voices and ensure AI serves the public interest (John Palfrey)
Paula explicitly states that AI value cannot be judged solely in monetary terms and stresses societal impact, whereas both Rudra and John foreground financial sustainability and the need for concrete revenue or funding models. The tension between a primarily societal-value narrative and a financially-driven sustainability narrative was not anticipated given the common focus on development outcomes [129-146][155-158][104-109].
POLICY CONTEXT (KNOWLEDGE BASE)
The monetary versus non-monetary valuation debate is illustrated by the Swiss AI initiative’s call for monetary risk quantification [S65] and contrasting perspectives that stress broader societal impact beyond financial metrics, as discussed in responsible AI value frameworks [S63][S64].
Overall Assessment

The discussion revealed three main axes of disagreement: (1) the design of regulatory frameworks – whether they should be adaptive and use‑case‑specific or stable and uniform; (2) the financing of AI diffusion – societal‑value‑driven models versus explicit OPEX/revenue or philanthropic funding; and (3) the measurement of AI’s value – non‑monetary societal benefits versus monetary sustainability metrics. While participants share a common vision of responsible, inclusive AI, they diverge on the mechanisms to achieve it.

Moderate to high disagreement. The divergent regulatory philosophies and funding expectations could impede coordinated action unless a hybrid model is negotiated that blends adaptive oversight with baseline stability and aligns philanthropic, governmental, and private financing while respecting both societal impact and financial viability. These tensions have significant implications for the implementation of AI policies, cross‑border cooperation, and the ability to sustain large‑scale AI deployments across diverse economies.

Partial Agreements
All three agree that some form of global AI governance framework is necessary, but Paula stresses contextual flexibility, Rudra warns against top‑down imposition, and Terah notes that the same core policy questions persist over time, indicating differing views on how the compact should be shaped and operationalised [61-65][59-60][71-78].
Speakers: Paula Ingabire, Rudra Chaudhry, Terah Lyons
A global AI compact is possible but must accommodate cultural, linguistic, and contextual differences (Paula Ingabire) Global norms are needed, but they must be integrated into national jurisdictions rather than imposed top‑down (Rudra Chaudhry) Foundational policy questions (fairness, transparency, bias, standards) raised a decade ago remain central today (Terah Lyons)
All agree that broader, multi‑stakeholder participation is essential for responsible AI, but John focuses on civil‑society and philanthropy, Terah on industry deployers across sectors, and Paula on African and South‑South voices, showing convergence on the goal but divergence on the composition of the stakeholder pool [105-108][179-183][205-216].
Speakers: John Palfrey, Terah Lyons, Paula Ingabire
Philanthropic collaborations with local labs and civil‑society organisations are crucial for building capacity and ensuring inclusive AI deployment (John Palfrey) The next summit should institutionalise multi‑stakeholder dialogue, bringing more deployers from industry, energy, manufacturing, etc., to the table (Terah Lyons) South‑South cooperation and greater African representation are essential; hosting future meetings in Kigali would amplify emerging‑economy perspectives (Paula Ingabire)
Takeaways
Key takeaways
Adaptive, use‑case‑specific regulation is preferred over abstract, one‑size‑fits‑all rules. A global AI compact is feasible but must accommodate cultural, linguistic and contextual differences and be implemented through national jurisdictions. AI governance should keep humans at the centre; a stable regulatory regime is essential to prevent treating AI as a magical, ungovernable force. Foundational policy concerns (fairness, transparency, bias, standards) raised a decade ago remain central today. Partnerships must prioritize co‑development and local capacity building rather than merely importing foreign solutions. Rwanda is building a national data hub and has enacted data‑protection and privacy legislation to ensure data sovereignty by design. The hardest challenges are human and institutional – building trust, establishing sustainable business models, and aligning risk‑management practices. Financial institutions bring sector‑specific risk‑management expertise that can guide responsible scaling and support regulatory harmonisation across borders. Philanthropy plays a critical role in providing long‑term capital for civil‑society participation and for AI‑for‑humanity initiatives; over $1 billion has been pledged by philanthropic efforts. Future summit processes should institutionalise multi‑stakeholder dialogue, include more deployers from diverse industries, and increase South‑South cooperation and African representation. Regulation and innovation should not be seen as a false binary; governance frameworks can stimulate further AI breakthroughs.
Resolutions and action items
Rwanda will continue to develop its national data hub and enforce its data‑protection and privacy law for AI deployments. The summit organisers are invited to consider hosting a future meeting in Kigali to amplify African and emerging‑economy perspectives. Participants (especially from finance, industry and philanthropy) will seek to bring more real‑world deployers (retail, energy, manufacturing) into future summit panels. Philanthropic bodies (e.g., MacArthur Foundation) will pursue collaborations with frontier labs and local research institutions to support responsible AI development. A call for developing concrete impact‑measurement metrics for AI deployments was made, to be pursued before the next summit.
Unresolved issues
Specific mechanisms and enforcement structures for a global AI compact remain undefined. Detailed sustainable OPEX/revenue models for AI deployments in sectors such as health, education and agriculture were discussed but not concretised. How to achieve practical regulatory harmonisation across jurisdictions without stifling innovation remains an open question. The exact process for scaling AI responsibly while ensuring trust and managing risk at the global level was not fully resolved. Further clarification is needed on how civil‑society organisations will be funded and integrated into AI governance frameworks.
Suggested compromises
Adopt shared, non‑negotiable global standards while allowing nations to contextualise them for specific use‑cases. Regulation should be adaptive and evidence‑based, built around deployed use‑cases rather than imposed top‑down. View regulation and innovation as complementary; use governance frameworks to drive further AI breakthroughs. Balance the need for rapid diffusion with responsible, financially viable deployment models that include capacity‑building components.
Thought Provoking Comments
Rather than try to focus more on regulating, we’d rather figure out where we see AI creating the biggest benefits and gains for society, and then build regulations that are specific to those use‑cases. Our regulatory posture is adaptive and evidence‑based, not an abstract framework.
She reframes AI governance from a top‑down, one‑size‑fits‑all model to a use‑case driven, adaptive approach, highlighting how regulation can evolve alongside deployment.
This comment set the foundation for the discussion on how Rwanda balances policy and adoption. It prompted follow‑up questions about global standards versus national flexibility and influenced other speakers to stress context‑specific risk assessment and the need for adaptable frameworks.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
I don’t think the hardest questions in this field are technical right now; they are human and institutional issues. The real challenge is making the technology useful to real organisations and engendering trust so it can be widely adopted.
Lyons shifts the focus from technical breakthroughs to societal, trust, and institutional challenges, arguing that the frontier is now about practical, trustworthy deployment.
Her point redirected the conversation from abstract policy to concrete adoption hurdles, leading the panel to explore trust, risk management, and the practicalities of scaling AI responsibly.
Speaker: Terah Lyons (Managing Director, Global Head of AI and Data Policy, JPMorgan Chase)
I believe a global compact is possible, but it has to reflect different cultural, linguistic and contextual realities. We need shared non‑negotiable standards, with room for nations to adapt them to the specific problems they are solving.
She acknowledges the desirability of a universal framework while emphasizing the necessity of flexibility, bridging the gap between global governance aspirations and national sovereignty.
This answer opened a nuanced debate on how universal norms can coexist with local adaptation, influencing later remarks about regulatory harmonisation and data sovereignty.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
We need to make AI work for humans and put humans at the centre, not treat AI as something magical. A stable regulatory regime that serves people is essential, otherwise we advance technology for its own sake.
Palfrey foregrounds a human‑centric ethic for AI, linking philanthropy, policy and societal outcomes, and warning against technology‑driven hype.
His human‑first framing reinforced the earlier adaptive‑regulation theme and steered the discussion toward the role of civil society and philanthropy in shaping responsible AI deployment.
Speaker: John Palfrey (President, MacArthur Foundation)
We really need regulatory harmonisation to the extent possible so that there is consistency of rules across borders. A global baseline would give operators clarity and enable responsible scaling at census‑scale.
Lyons highlights the practical necessity of cross‑border regulatory alignment for multinational AI deployment, connecting governance with operational scalability.
This comment linked Rwanda’s data‑sovereignty efforts with the broader need for international rule‑making, prompting the panel to consider how global standards can support large‑scale, cross‑jurisdictional AI use.
Speaker: Terah Lyons (JPMorgan Chase)
I would like to see more deployers from retail, energy, manufacturing and other parts of the real economy sitting on panels like this, so we hear how AI delivers value to everyday customers and citizens.
She calls for broader stakeholder representation beyond finance and government, emphasizing the importance of voices from the ‘real economy’ in shaping AI policy.
This suggestion broadened the agenda for future summits, resonating with Paula’s invitation to host the next meeting in Kigali and reinforcing the need for South‑South and multi‑sector participation.
Speaker: Terah Lyons (JPMorgan Chase)
By design we are building a national data hub and have already put in place a data protection and privacy law. We are tackling data‑sovereignty proactively, not waiting for a crisis.
She demonstrates a proactive, design‑by‑default approach to data governance, illustrating how Rwanda integrates sovereignty concerns into AI rollout.
This concrete example of pre‑emptive regulation gave weight to the earlier abstract discussion on adaptive policy, and it was referenced later when participants talked about risk‑by‑use‑case and the importance of guardrails.
Speaker: Paula Ingabire (Minister of ICT and Innovation, Rwanda)
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from high‑level aspirations to concrete, actionable ideas. Paula Ingabire’s advocacy for adaptive, use‑case‑driven regulation and proactive data sovereignty set the tone for a pragmatic governance narrative. Terah Lyons’ emphasis on human and institutional challenges, together with her calls for regulatory harmonisation and broader stakeholder representation, redirected the focus toward trust, scalability, and inclusive policymaking. John Palfrey’s human‑centric framing reinforced the ethical underpinnings of these arguments, while the dialogue on a flexible global compact highlighted the tension between universal standards and national contexts. Collectively, these comments created turning points that deepened the analysis, introduced new dimensions (trust, cross‑border consistency, South‑South cooperation), and shaped a forward‑looking agenda for future summits.

Follow-up Questions
Is a global AI compact feasible, and what shared standards and contextual adaptations are needed for national jurisdictions?
Understanding the possibility of a global compact is crucial for coordinated risk management and governance across countries.
Speaker: Rudra Chaudhry (asked to Paula Ingabire)
How can adaptive, use‑case‑specific regulatory frameworks be designed and implemented effectively?
Rwanda’s approach of building regulations around specific AI deployments suggests a need for research on best practices for adaptive regulation.
Speaker: Paula Ingabire
What concrete guardrails and technical measures are required to ensure data sovereignty and privacy in national data hubs?
While Rwanda has a data protection law, details on enforcement and technical safeguards remain unclear and need further study.
Speaker: Paula Ingabire
What sustainable OPEX or revenue models can support large‑scale, beneficial AI deployments in developing economies?
Identifying financially viable models is essential for long‑term AI adoption beyond pilot projects.
Speaker: Rudra Chaudhry (asked to Paula Ingabire)
How can the impact of AI adoption be quantitatively measured and reported across sectors and regions?
Quantifying impact would enable better assessment of AI’s benefits and guide future investments.
Speaker: Paula Ingabire
What steps are needed to achieve regulatory harmonization for AI across borders while respecting sovereign AI initiatives?
Cross‑border consistency can reduce compliance burdens for multinational operators and facilitate responsible scaling.
Speaker: Tara Lyons
How can future summit panels include a broader range of AI deployers (e.g., retail, energy, manufacturing) to reflect real‑economy perspectives?
Incorporating diverse industry voices will enrich policy discussions with practical deployment insights.
Speaker: Tara Lyons
What models of partnership between philanthropy and frontier AI labs can accelerate responsible AI deployment in low‑resource settings?
Exploring collaborative frameworks can leverage philanthropic resources to drive innovation while ensuring governance.
Speaker: John Palfrey
What mechanisms can strengthen South‑South cooperation for AI governance, capacity building, and deployment?
Facilitating collaboration among emerging economies can accelerate inclusive AI adoption and share best practices.
Speaker: Paula Ingabire
What effective capacity‑building strategies can equip Rwanda’s youth with AI skills to develop and maintain local solutions?
Building a skilled domestic workforce is critical for sustainable AI ecosystems and reduces reliance on external vendors.
Speaker: Paula Ingabire
How can risk assessment frameworks balance use‑case‑specific risks with overarching ethical standards?
Developing nuanced risk models can prevent over‑broad regulation while protecting against specific harms.
Speaker: Paula Ingabire & Tara Lyons
What are citizens’ perceptions of AI‑driven public services, and how do trust and perceived value influence adoption?
Understanding public sentiment is vital for designing AI solutions that are accepted and widely used.
Speaker: Paula Ingabire

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by describing a “defining moment” for work, with AI creating new productivity and jobs while also generating anxiety about disruption to white-collar work [1-3]. Deepak Bagla warned that there is no existing playbook for the coming transition and that the next five years will be the toughest period of disruption, urging preparation for possible job loss and emphasizing the need for early AI and tinkering education at schools [12-16][21-25][27-29]. He stressed that reskilling will be essential as workers must learn new tasks to stay relevant in the evolving labour market [27-29].


Radhika presented IMF-ILO research showing that only 3-6 % of jobs globally face a high likelihood of full automation, while about 20 % have some tasks automatable, creating opportunities to boost productivity [34-42]. She argued that policy must both protect the small fraction of workers displaced through social-protection measures and enable the majority to augment their work with generative AI, linking productivity gains to higher wages, demand and job creation [44-48][49-52]. Sanjeev Bikhchandani noted that, contrary to media hype, Naukri’s hiring data have not yet shown a slowdown, but he cautioned that the future remains uncertain and recommended that individuals learn three new AI platforms each quarter to remain employable [57-59][64-68]. He illustrated this with his own experience of being the only PC-literate employee in 1989, arguing that early technology adoption can protect jobs and that AI adoption will be similarly decisive [75-85].


Prashant Warier explained that in healthcare AI will primarily upskill radiologists and primary-care providers by automating image interpretation, symptom triage and test recommendation, while regulatory approval and liability concerns will keep doctors central to decision-making for the next decade [95-104][108-119]. He gave examples of AI-driven note-taking and integrated data platforms that support clinicians, suggesting AI will act as a supportive tool rather than a replacement [104-124]. The discussion then turned to education, with Deepak noting that AI is eroding the perceived value of long degree programmes and that younger, task-oriented workers may enter the labour market earlier, challenging traditional academic pathways [140-142]. Sanjeev added that elite credentials still signal ability and commitment, but many roles still require experience, leadership and interpersonal skills that cannot be replaced by AI alone [144-149].


Radhika highlighted that the conversation so far covers only about 10 % of India’s workforce, stressing that the informal, agricultural and micro-enterprise sectors-where gig and temporary work dominate-risk being left behind unless AI adoption, digital infrastructure and social protection are extended to them [161-170][172-173]. She called for updated labour regulations and platform-economy safeguards to ensure decent work for non-standard employment arrangements [174-178]. In rapid-fire closing remarks, the panelists agreed that success by 2030 would mean coordinated action among government, academia and industry, inclusive AI-driven productivity gains, and a net increase in jobs, especially for the informal sector [175][176][177][178]. The discussion concluded that while AI will reshape work, its impact can be managed through reskilling, policy innovation and broad-based inclusion, positioning India to reap the “delta multiplier” of AI [175][178].


Keypoints

Major discussion points


Uncertainty about the pace and shape of AI-driven disruption – Bagla stresses that there is “no playbook” and that the next five years will be “the toughest times of disruption,” with no clear view beyond ten years [12-16][27-29].


Automation will affect jobs unevenly; only a small share faces full displacement – Radhika cites ILO research showing only 3-4 % of jobs globally (≈6 % in high-income countries) have a high likelihood of total automation, while about 20 % will see some tasks automated, creating opportunities to boost productivity [34-42][44-49][52-53].


Proactive upskilling and continuous AI literacy are essential for employability – Sanjeev advises learning three new AI platforms each quarter (≈12 per year) as a practical way to stay relevant, drawing parallels with the early PC era where early adopters secured jobs [65-68][81-85].


Sector-specific implications: healthcare as an illustrative case – Prashant explains that AI will mainly augment doctors (e.g., radiology interpretation, note-taking, decision support) rather than replace them, but regulatory clearance and liability issues will shape adoption [95-104][108-119][120-124].


The informal/gig economy and broader labour policy need inclusive AI strategies – Radhika highlights that most Indian workers are in agriculture or micro-SMEs, where AI adoption is limited; she calls for updated labour regulations, social protection, digital infrastructure, and financing to ensure these workers are not left behind [161-170][172-176][178].


Overall purpose / goal of the discussion


The panel convened to assess how generative AI will reshape the future of work in India and globally, to surface the uncertainties and potential disruptions, and to identify concrete actions-ranging from reskilling and education reforms to sector-specific adoption and policy redesign-that can harness AI’s productivity gains while protecting vulnerable workers.


Overall tone and its evolution


– The conversation opens with a cautious, anxiety-laden tone, acknowledging “growing anxiety” about disruption [3][4].


– It quickly shifts to a analytical, evidence-based tone, with Radhika presenting data on automation exposure and policy needs [34-42].


– Mid-discussion the tone becomes pragmatic and solution-oriented, as Sanjeev offers concrete upskilling advice and historical analogies [65-68][81-85].


– When addressing healthcare, the tone is optimistic yet realistic, emphasizing AI as a supportive tool while noting regulatory constraints [95-104][108-119].


– The final segment adopts an inclusive, forward-looking tone, stressing the need to bring informal workers into the AI transition and calling for coordinated action among government, academia, and industry [161-176][178].


Overall, the discussion moves from concern to constructive optimism, ending with a shared vision of an inclusive, AI-enabled future of work.


Speakers

Deepak Bagla


– Area of Expertise: Artificial Intelligence, Innovation, Education


– Role: Mission Director, Atal Innovation Mission (AIM)


– Title: Mission Director, Atal Innovation Mission [S4][S5]


Radhika


– Area of Expertise: Labor Economics, AI Impact Research


– Role: Researcher


– Title: Affiliated with Podar International School (as per external source) [S2][S3]


Prashant Warier


– Area of Expertise: AI Policy & Governance (inferred from panel participation)


– Role: Panelist / Expert (no specific title provided)


Speaker 1


– Area of Expertise: Event Moderation / Facilitation (inferred)


– Role: Moderator / Host of the panel discussion


– Title: Event Moderator (no specific organizational title) [S6][S7][S8]


Sanjeev Bikhchandani


– Area of Expertise: Employment Platforms, Digital Recruitment, AI in HR


– Role: Founder, InfoEdge; Operator of Naukri.com


– Title: Founder & Chairman, InfoEdge Ltd.; Founder of Naukri.com [S9]


Additional speakers:


Dipali – Mentioned as a team member working with Deepak Bagla on AI and tinkering initiatives (no further details).


Jiv – Referenced briefly by Speaker 1 (“jiv this will allow me…”); no role or title identified.


Nadeka – Addressed by Speaker 1 regarding gig workers and labor laws; no role or title identified.


Unnamed Ivy League Professor – Cited by Deepak Bagla in discussion; no name or title provided.


Other panel participants (e.g., audience members) – No specific identities given.


Full session reportComprehensive analysis and detailed insights

The discussion opened by framing the present as a “defining moment” for work, in which artificial intelligence (AI) is unlocking new productivity and creating fresh job opportunities while simultaneously fuelling anxiety about disruption to white-collar occupations [1-3]. The moderator invited Mr Deepak Bagla to outline how businesses and policymakers should navigate this transition [4-5].


Bagla said that no established playbook exists for the coming AI-driven change. He recalled that, in 1986, banking trainees were told the teller job would be “stable and safe” [7-9], yet digitisation soon rendered tellers the first casualties [10-12]. He argued that the next five years will be “the toughest times of disruption” and that workers must prepare psychologically for possible job loss, followed by a decade of reskilling [15-18][27-29]. To mitigate the shock, Bagla highlighted a pilot effort at the “Aatil Tinkering Lab” (as transcribed as ‘Lag’) that introduces AI and hands-on tinkering at school level, aiming to produce task-oriented graduates who can adapt to new job profiles [20-25].


Radhika noted that, citing an ILO study and referencing IMF research, only 3-4 % of occupations worldwide have a high probability of full automation, rising to about 6 % in high-income economies [37-41]. Around 20 % of jobs will see some tasks automated, opening space for productivity gains [42-43]. She argued that policy must address two fronts: (i) a small cohort of workers who will be displaced, requiring industrial, macro-economic, trade, labour and social-protection measures [44-48]; and (ii) the majority whose roles will be partially automated, who need support to augment productivity with generative AI, thereby raising wages, demand and overall job creation [49-52].


Sanjeev argued that, despite widespread media hype, Naukri’s hiring data show no current slowdown, suggesting that AI is still in a productivity-enhancing phase rather than a job-destruction phase [57-59]. He drew a parallel with the 1980s computer rollout in Indian banks, which increased efficiency without massive layoffs [64-70]. From this history he distilled a concrete recommendation: individuals should master three new AI platforms each quarter-twelve per year-to remain employable [65-68][81-85]. His personal anecdote of being the sole PC-literate employee in 1989 illustrates how early technology adoption can safeguard careers [75-80].


Prashant explained that AI will primarily upskill radiologists and primary-care providers by automating image interpretation, symptom triage, test recommendation and note-taking, while regulatory clearance (e.g., FDA or CDSCO) and liability concerns will keep doctors central to decision-making for at least the next five to ten years [95-104][108-119]. AI-driven tools that aggregate electronic medical records, imaging and pathology data into a single decision-support platform exemplify how technology can augment, rather than replace, clinicians [120-124].


Bagla warned that AI is already eroding the perceived value of long degree programmes, noting that master’s students question high tuition fees because AI can supply answers, and that task-oriented work may be performed by teenagers [138-144]. He also noted that the number of people who will move into task-creation and task-execution roles is still un-quantified, highlighting a gap in current labour-market forecasting [138-144]. Sanjeev counter-pointed that elite credentials (e.g., IIT degrees) still serve as a strong filter for hiring, reflecting commitment and analytical ability, yet he acknowledged that real competence also depends on experience, leadership and interpersonal skills [144-154].


Radhika prefaced her analysis by noting that the discussion is taking place at a global-south summit on the future of work [165-167] and broadened the scope to the informal and gig economy, reminding that roughly 45 % of India’s workforce remains in agriculture and 55 % are self-employed, with 95 % of enterprises employing fewer than ten workers [165-167]. For this segment, the risk is not massive automation but exclusion from AI-driven productivity gains due to limited digital infrastructure, financing and skill development [169-173]. She called for updated labour regulations to cover platform work, expanded social-protection schemes, and targeted investments in broadband and AI adoption for micro-SMEs and farms [174-178].


When asked which part of the AI stack needs the most attention, Bagla pointed to the application layer, arguing that small innovators can deliver rapid, high-impact solutions [130-133]. In the rapid-fire round, Bagla emphasised that success will require coordinated effort among government, academia, industry and society, and that the AI “delta multiplier” could deliver especially large benefits to India, provided all stakeholders align [175].


Across the panel, three points of consensus emerged: (i) continuous upskilling/reskilling is indispensable; (ii) AI is expected to augment productivity rather than cause wholesale job loss; and (iii) comprehensive policy-including industrial, macro-economic, trade, labour and social-protection measures-is required to ensure an inclusive transition, particularly for informal workers [16-29][44-48][161-176].


Disagreements followed each consensus point. On the impact of automation, Bagla warned of severe near-term disruption, whereas Radhika’s data suggested only a modest share of jobs face full automation [37-41]; Sanjeev reported no observable hiring slowdown [57-58]. On education, Bagla advocated early AI-centric schooling that could diminish the relevance of traditional degrees, while Sanjeev maintained that elite credentials remain a valuable hiring filter [138-144][144-154]. On policy focus, Bagla promoted a market-driven emphasis on the AI application layer, whereas Radhika argued for a broader policy package to absorb displaced workers [130-133][44-48].


Key take-aways


1. The next five-to-ten years will be the most disruptive period, demanding psychological readiness and reskilling [15-18][27-29].


2. Only a small fraction (3-6 %) of occupations are fully automatable, while about 20 % will experience partial automation that can boost productivity [37-41].


3. Historical technology waves (e.g., computers) increased efficiency without massive layoffs, suggesting a similar trajectory for AI [64-70].


4. Education must shift toward early AI exposure and continuous skill acquisition, with a practical target of mastering three AI platforms each quarter [65-68][81-85].


5. Policy responses must be multi-dimensional-covering industrial strategy, macro-economics, trade, labour-law reform and social protection-to support displaced workers and enable productivity gains [44-48].


6. In healthcare, AI will act as a decision-support and efficiency tool, constrained by regulation and liability [95-104][108-119].


7. The informal sector, which employs the majority of India’s labour force, requires digital infrastructure, financing and tailored skilling programmes [169-173].


8. Prioritising the AI application layer can accelerate deployment by small innovators, while still recognising the need for broader systemic support [130-133].


In the rapid-fire closing round, each panelist offered a concise vision of success for 2030. Bagla highlighted the necessity of joint action among government, society, academia and industry, and the AI “delta multiplier” for India [175]; Prashant envisaged global GDP growth of 10 % or more by 2030 driven by AI [176]; Sanjeev defined success as a net increase in jobs-more jobs created than lost [177]; and Radhika called for an inclusive AI transition that delivers better, more productive jobs across formal, agricultural and MSME sectors without abandoning the informal economy [178].


Thus, while AI presents both disruption and opportunity, the panel agreed that proactive reskilling, inclusive policy design and focused application-layer innovation will determine whether India’s defining moment translates into shared prosperity by 2030.


Session transcriptComplete transcript of the session
Speaker 1

Thank you. We’re at a very defining moment in the history of work. On one end, we’re seeing new possibilities, new productivity unlocks, new jobs being created. And on the other, there’s a lot of growing anxiety around what would it mean and the kind of disruption it will bring to work, especially the knowledge work, the white collar jobs, as they say. Let me start with Mr. Bagla. How should businesses and policymakers think about this transition?

Deepak Bagla

It’s very interesting. First, I don’t think any of us have any answers. We will try. The fundamental point, you know, and I remember when I joined banking and I take you back to 1986, we went for training and the first thing we were told that the only job which will never change. And is stable and safe in the banking world is that of the teller. You have to go get your. The first job to go when digitization happened was the teller. Because you started taking it out of the machine. Now the challenge which remains for all of us is that we are entering into an era where there’s no playbook. What is it which it is going to move into?

So we’ve got to put it into time spans if I look at it. What is going to happen in the next 5 years, 10 years and then after that no one knows. I think next 5 years is going to be one of the toughest times of disruption. How many of you have ever been laid off? Excellent. You’re the only one ready for the next 10 years. That is the most important thing going forward. And I think one of the things which we are trying to do at the Aatil Tinkering Lag, because I have a team here, Dipali is here and with her she is the one who is putting it. At the school level. we are trying to bring AI and tinkering.

The idea of innovation that you… And what I’ve also started seeing as a trend from there that many of them may not be looking at going to a very formal education system, but getting into a job profile there and then. And it’s more task -oriented. So I’ll start off with this, and I know we’ll go on with the questions. But let me end here. But as I see it, I think that disruption in the next five years and 10 -year period will be a lot for all of us to learn psychologically on how can we be without a job when we are asked. That’s the first most important point. And then tend to see what is it which we can pick up to take on next, because that’s where we all talk about will be that reskilling piece coming in.

Speaker 1

Radhika, you have done the research recently on this. Let me ask the same question to you. But let me add, are we overestimating near -term job loss? Are we overestimating the long -term transformation which it’s bringing?

Radhika

somewhat yes first let’s let me also somewhat endorse what Mr. Bagla said I think that this is there’s immense uncertainty and we really do need to have more granular and more nuanced understanding of what this transition actually entails because you know different segments of the population different segments of the workforce are going to be impacted differentially by this transition now there is this narrative of this doomsday prediction and we’re all going to lose our jobs and we’ve got to be psychologically prepared for losing our jobs I think yes it is indeed the case that most of our jobs are going to be exposed to automation and to gen AI but it doesn’t mean that our jobs are going to be destroyed or that they’re going to be completely dispased because if you go and look at the academic literature and a lot of the research the IMF that the managing director was spoke in the session before at the ILO we know that an occupation essentially entails many different tasks it’s a bundle of tasks now there are some tasks in those occupations which are going to get automated.

And there are others which are not going to be done, not going to be. Now, last year, the ILO, late last year, the ILO actually put out a study where they looked at all the different occupations and they did a gradation of the extent to which they were exposed to automation. Now, if you look at the share of jobs where almost all the tasks had a high likelihood of automation and therefore were likely to be displaced, that number was actually somewhere between 3 % or 4%. And that’s a global average. If you actually break it down and look at it in countries with low income, middle income, it was even lower. In high income countries, that was close to 6%.

But the share of jobs where some tasks were going to be automated, but that also meant that there was more scope for freeing up time to bring in new tasks, enhance their productivity, was actually quite high. That was about 20 % of the jobs. So what I’m saying is that in order to manage this transition, there are two things we’re going to have to do. One, of course, it is indeed the case that a small proportion of people will lose their jobs and they will be displaced. We need to think about how they are going to be absorbed in other sectors. And that, to my mind, is going to require more than skilling and reskilling. It’s also going to require thinking more carefully about industrial policy, about macroeconomic policy, trade policies, labor market policies, in particular, social protection.

But for those who are actually in the middle, where some tasks will be automated and others will not, we need to think carefully about how those occupations can actually augment their productivity, how they can engage more meaningfully with Gen AI and enhance their productivity. Because remember, all of this then also has an implication that enhanced productivity, which has an implication in wages and prices. All of that also boosts demand in the economy, which then drives more job creation and investment. And that virtuous cycle of growth, investment, job creation. So I would say that, you know. So, yes, support those who are, you need policies to support those who will be displaced, but at the same time, augment productivity in the other jobs, which are somewhere in the middle, and there is some buffer against automation.

Speaker 1

Sanjeev, with Naukri, you have a front seat to what’s happening in this space. Like, are you seeing structural shifts?

Sanjeev Bikhchandani

You know, there’s a lot of feedback we get from media, from social media, from panelists. But you know what, as of now, Naukri growth has not been impacted. So on the ground, we are not seeing a reduction in hiring. But at the same time, we are careful and cautious and say, what will happen now? Answer, I don’t know. Right. And the truth is, nobody knows. And anybody who is telling you he knows is… is wrong. They don’t know. So because there’s so much happening, and it’s so chaotic, that you can’t really figure out, right, what is going to happen. Right. But I’ll go back in history a bit. 1982 I was in college Deepak was in college we were in college together actually in Delhi University and these two new companies were set up Aptek and NIIT saying we are going to teach you how to use a personal computer nobody cared a few cared but it was not mainstream it was not ok so most people didn’t care by 1985 you know it had become somewhat a requirement that if you go and learn how to use a computer maybe your prospects of getting a job go up or if you got a job maybe you will become more productive at your job in 1985 to the Rajiv Gandhi government the government said we are going to introduce computers in banks at that time banks mostly public sector banks the All India Bank Employees Association which is one of the most powerful trade unions in the country then went ballistic so you got to lose jobs you are going to lose jobs government said never mind we are putting them in anyway so computers came into banks they weren’t used for a while then they began to get used and guess what nobody lost jobs people got more productive people got MIS that they weren’t getting earlier they served their customers better nobody lost jobs so new technology increased productivity did not cause job losses now I am not saying that is exactly what will happen this time but you know maybe now will some jobs or tasks be get automated possibly so but will others come up almost certainly yes so what I tell individuals never mind policy guys and governments and you know multilaterals what I an organization what an individual is look you don’t bother about will jobs be lost will my job be lost will I lose my job and will I get a new job will I get a new job Then my answer is simple.

Learn how to use three AI platforms every quarter. By the end of one year, you know 12 AI platforms. Believe me, you will be employable. I’ll give you an illustration of this. I finished business school in 1989. By then, I had finished college. I had done three years of work in an ad agency and had done business school. That’s very important year. Why is it an important year? Because the classes of 1988 and 1989 were the first two batches to have graduated from the IIMs who had actually used PCs at the IIMs because the PCs came into the IIMs in 1987. So I walked into my job as PC literate. There were two PCs in the marketing department at the company where I was working.

All the other people were senior, very highly qualified. IITs were senior. But I was the only guy who was PC literate. Believe me, if they were sacking then, I would have been the only guy who was PC literate. in that department, I would have been the last to go. I knew how to use that technology. So if AI is coming, it has come. It is inexorable. It is relentless. It will come. It has come. Learn how to use it. So if you don’t do AI, AI will be done to you.

Speaker 1

Very insightful. I think if you optimize local optima, we are somewhere going to find the global balance. Prashant, with that, let me Radhika referred to, you know, job is a bundle of tasks. Tasks will get disrupted. But the role might shift for all of us. Let’s make it real. You are closer to the medical community. How does the role of a doctor or a nurse change going forward? Can we envision an AI doctor in the future? What would the job look like?

Prashant Warier

I think healthcare is slightly different from a lot of other industries. I think it is highly regulated, number one. So I think about three things from a healthcare perspective. From a futuristic perspective as well. One is that we have to be able to able to make sure that we are able to able to the capacity is limited especially if you’re talking about the global south right india has i mean we operate in the radiology ai space we automatically interpret radiology images with ai and if you look at india india has got one radiologist for every hundred thousand people which is about and us has one radiologist for 10 000 people kenya if you look at kenya has the same number of radiologists as marginal hospital so um and and many african countries have like one or two radiologists very very small number right and so there is not enough capacity to meet that demand so when you look at job loss per se i mean there is not enough capacity to meet the demand that is there for health care so in many ways i mean you’re not going to lose jobs it is going to upskill people health care workers and doctors who are on the ground supporting patients so that’s that’s one is about upskilling uh people right and supporting uh making health care workers able to uh support patients maybe there is an ai doctor that can do primary care i mean primary care is something that can be significantly automated i mean you’re looking at three things that you’re doing in primary care one is to understand patient symptoms so ai can prompt the patient can understand what symptoms they might have Second is to recommend tests, which again, AI can identify the right tests and recommend what testing should do.

And third is around diagnosis and treatment, right? Again, which AI can potentially do or even sort of triaging to the specialist. So these are things which AI can do. So I think in general, AI is going to upscale doctors and healthcare workers to do better and meet more patients and save time, right? One of the things that we are seeing across the world is you are using AI agents to scribe and take notes of the doctor -patient conversations, which is a task which, I mean, if a doctor is meeting 40, 50 patients a day, and after every one of those conversations, they have to write down, take notes from that conversation, AI can do that automatically. We use that, we use note takers in our meetings.

Why can’t you use note takers in a doctor -patient conversation? So we are seeing that, I mean, upscaling sort of one area. Second area, I think, which is going to, at least from a healthcare perspective, I see is a tough one is around regulation, right? Everything that… AI does today in… healthcare across the world. In the US, it’s FDA cleared. You have to get FDA cleared to be able to actually provide a clinical decision support to a doctor. So that is not going away right now. And that FDA equivalent, India, CDSCO, every country has its own regulatory body. So you have to figure out how to cross that barrier. That hurdle is still there. And that is not something that is going away right now.

And that brings me to the third point, which is that today, I mean, if a doctor is taking a decision on a patient saying that this patient has tuberculosis, for example, or lung cancer, or any of those, right, they are taking liability for that decision. And till AI is going to be able to take that liability, that is going to be a decision that doctors will make. And so what I see today, and for the next at least five to 10 years, is that AI is going to be supporting doctors in making better decisions. It is helping, it is providing all the data in the right format. For example, what we do is we are able to bring in the right data, and we are able to bring in the right data.

And so that is going to be bring data from electronic medical records, PAX, basically imaging data of the patient, pathology data of the patient, bring all of that together into one place for the doctors to help. diagnose better. So you’re providing that support to the doctor in making a clinical decision and also providing treatment planning, sort of automated treatment planning of treatment plans, which they can use to then provide the treatment plan for that patient. So it is a supportive tool and I see that for the next several years, AI is going to be upskilling doctors in providing better care and providing more care to patients, especially in the global

Speaker 1

If, you know, multiple areas or multiple playgrounds where action is happening, like there’s startups, there’s infra, there is energy, you know, yesterday our Honourable Minister spoke about the five layers of AI. Where do you see most amount of action needed? Like if you had to pick one area to double down on, what does India need?

Deepak Bagla

Within the AI stack or generally?

Speaker 1

Within the AI stack.

Deepak Bagla

I think on the application side is where we will have… a very interesting play on actually the small ones and actually getting them executed. That’s where we’ve had some strength in any case. But let me just step back a minute beyond this question, if I may, with your permission. You know, one very interesting thing. Yesterday, the plenary, I was sitting right there and next to me was a professor from an Ivy League. Let me not just say it, but one of the top five Ivy Leagues. And I was asking the professor, when are you going to go back and start teaching? Because he was taking a break to do it. He told me a very interesting thing.

One of the big things which is happening in this university is that the master’s students are feeling that they don’t longer need to pay that big tuition fees because they are no longer getting challenged. Because AI is giving them all the answers. Now see the repercussion of that. When we say that we have a million people coming into the job market every year, every month in India. that is because we go through a bachelor’s and a master’s and then they’re coming in so let’s say like sanjeev and i started 22 23 24 one of the most interesting elements which was pointed out was that maybe that age barrier no longer remains you may have somebody who’s 13 year old and ready to do a job in a task and that is another trend which might just picked up because the moment you’re going to see a complete change in the educational system think of two industries which have so far withstood or been having a pushback on the huge change which can come to them the financial industry is one and the education industry but now they’re being challenged on it in a big way you’re four years master of two years master’s four years bachelor’s maybe nobody needs to do it but they’re being challenged on it in a big way you’re four years master of two years master’s so see the number of people which will get into the task creation and the task doing force that is another element which we’ve not yet been able to quantify

Speaker 1

very insightful answer jiv this will allow me to go back to the first question i asked you are you seeing a structural shift like for example are people now instead of asking degree pedigree asking for more afluency basic skills instead of

Sanjeev Bikhchandani

oh i people talk about it i’m not sure how many people actually do it right at the end of the day if you’ve got a credential it matters see uh what does an iit degree mean at what level it means you’ve learned something another level it means boss you were you you have demonstrated commitment to a prepare to get it so you you are able to work hard you know some level of physics chemistry maths that’s how you cleared the entrance exam right and you were at the top of the academic heap and that’s how you got into the place in the first place So when we go to IIT to recruit, we don’t hire for the specific knowledge they got at IIT.

We hire for the fact that it’s a fantastic filter on several accounts, right? Also, right? And to some extent, you know, a 13 -year -old, you know, ready for work. Look, business is about people. Business is about people and managing people and working with people and selling to people and, you know, running teams, being a good team player, being a good leader. So that comes with at least some years of experience, some years of, you know, maturity, right? So can I be a forex trader in front of a computer at 16? If I’m technically good enough, answer is yes. But can I be a forex trader in front of a computer at 16? Can I lead a team of salespeople out in the field?

who are calling on clients who are 20 years older than me, I don’t know. Maybe you can, maybe you can’t. So, you know, some stuff, I mean, people are still people.

Speaker 1

Nadeka, what does it mean for the gig workers and the temp workforce? And, you know, the labor laws were written long back. What would it mean as we move ahead? How should we even think of the labor laws or the role that the temporary labor brings in? Like, we are done with the age of working in the same organization for 30 years, as I just mentioned.

Radhika

So you’re talking about temp workers and gig workers. And before I answer that more directly, I just want to reflect on the comments that have been made by the other panelists. You know, the conversation that we’re having here on displacement and productivity enhancement, including the comments that I made earlier, we’re really talking only about 10 % of India’s workforce at this point. The conversation on AI is right now, you know, today we are having this summit in the global south. And the Global South still, vast proportion of the workforce is in the informal sector. For India, 45 % of its workforce is still in the agricultural sector. 55 % of the people are self -employed. 95 % of employment is in enterprises with less than 10 workers.

So, you know, that part of the conversation, we are completely missing out in the future of work. And I think we need to bring that in here as well, because a lot of the gig work and the casual work that you’re referring to is essentially what we see in the informal sector. And for that sector, the risk is not excessive automation. They might completely miss the bus and not realize any of these gains or productivity gains from AI. So we also need to think more carefully about how all of this can enhance productivity in the agricultural sector. How there could be greater AI adoption amongst micro and small. All enterprises, which are basically the engines of job creation in India.

and that’s again going to require a lot more than skilling and credentials but also they’re going to need a lot of financial support for adopting AI they are going to need digital infrastructure access to broadband so on and so forth and now going back to your question on the changes in the world of work and labor regulations indeed there is no denying the fact that labor regulations have not kept pace with the changes in the employer -employee relationships we now live in a world of work where there is a proliferation of non -standard employer employment arrangements the platform economy is a manifestation of that as well and certainly there’s a need to update that at the ILO for example there is a conversation for two years which is happening on what are the kinds of conventions and recommendations that are required to bring decent work into the platform economy and of course India is leading in that conversation with the code and social security which seeks to provide social protection even to platform workers so that’s a very forward looking ambition.

Speaker 1

Yes well we’re at time but I’ll just say that I think it’s a very important point that I think it’s a very important point to just end with one last question rapid fire one word maximum five second answer a lot of still unknowns what does success look like in 2030 what would you be proud of we’ll go in a row

Deepak Bagla

most critical point when everyone works together the government the society the people the academia i think that joining the dots is absolutely core to seeing any element of success for anyone and last point i think the biggest benefit of the delta multiplier of ai is india or will be india

Prashant Warier

krishan i think success for ai is the world’s gdp growing at 10 or more by 2030

Sanjeev Bikhchandani

i think uh if there is net job increase which means the jobs lost if any are less than the jobs created i think that is success

Radhika

i think an inclusive ai transition where we have better jobs, more productive jobs, and where the agricultural sector and the MSME sector have benefited from this transition and we don’t leave the informal sector behind.

Speaker 1

With that, we’ll wrap this panel discussion. Thank you so much for the insightful comments.

Related ResourcesKnowledge base sources related to the discussion topics (37)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“The discussion opened by framing the present as a “defining moment” for work, in which artificial intelligence (AI) is unlocking new productivity and creating fresh job opportunities while simultaneously fuelling anxiety about disruption to white‑collar occupations.”

The moderator’s opening remarks describe a defining moment with new productivity, new jobs, and growing anxiety about disruption to work, matching the report’s description [S1].

Additional Contextmedium

“The moderator invited Mr Deepak Bagla to outline how businesses and policymakers should navigate this transition.”

Deepak Bagla is identified as a Mission Director of the Atal Innovation Mission, confirming his relevance to the discussion on AI and work [S4].

Confirmedmedium

“Bagla recalled that, in 1986, banking trainees were told the teller job would be “stable and safe”, yet digitisation soon rendered tellers the first casualties.”

Bagla’s recollection of a 1986 banking training that emphasized teller jobs as “stable and safe” aligns with the transcript excerpt where he mentions the same anecdote [S95].

Additional Contextmedium

“Bagla highlighted a pilot effort at the “Aatil Tinkering Lab” (as transcribed as ‘Lag’) that introduces AI and hands‑on tinkering at school level.”

The Atal Innovation Mission runs a large network of Atal Tinkering Labs (about 10,000 labs) that provide hands-on technology experiences to students, supporting the claim about a pilot AI-tinkering effort [S5].

Confirmedhigh

“Radhika cited an ILO study indicating that only 3‑4 % of occupations worldwide have a high probability of full automation, rising to about 6 % in high‑income economies.”

ILO research reports that roughly 3.3 % of global employment is at risk of full automation, with higher shares in high-income countries, which corroborates the 3-4 % and 6 % figures quoted [S97] and [S15].

External Sources (97)
S1
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — – Radhicka Kapoor- Prashant Warier – Sanjeev Bikhchandani- Prashant Warier
S2
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — – Radhika Gupta: Podar International School Radhika Gupta: All right. Thank you. I like the way you situated in this…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Radhicka Kapoor provided a crucial counterbalance to doomsday predictions by introducing concrete research data from int…
S4
AI Innovation in India — -Deepak Bagla- Role: Mission Director; Title: Atal Innovation Mission The celebration of the Atal Innovation Mission’s …
S5
From India to the Global South_ Advancing Social Impact with AI — -Deepak Bagla- Mission Director for Atal Innovation Mission
S6
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S7
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S8
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S9
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — The discussion revealed nuanced perspectives on AI’s employment effects. Sanjiv Bikhchandani, founder of InfoEdge and op…
S10
How AI Drives Innovation and Economic Growth — Akcigit distinguishes between two layers of AI development in advanced economies. The application layer has low entry ba…
S11
https://dig.watch/event/india-ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — So whether it’s the chip layer, whether it’s the compute layer, I think it’s great that both Adani, Reliance announced $…
S12
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — India’s technical advantages are substantial. The country’s solar and wind patterns are naturally complementary, providi…
S13
The future of work: preparing for automation and the gig economy — PricewaterhouseCoopers’ latest studyforesees three waves of automation in the next 20 years: Throughout all three waves…
S14
Thinking through Augmentation — Artificial Intelligence (AI) and Large Language Models (LLMs) have received significant attention at the World Economic …
S15
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — Strong consensus exists around the need for inclusive, multi-stakeholder approaches to digital skills development, with …
S16
Bridging the Digital Divide for Transition to a Greener Economy — The analysis also underscores the importance of inclusive and innovative funding mechanisms for small and medium-sized e…
S17
AI for Social Empowerment_ Driving Change and Inclusion — Urgent need for comprehensive policy responses including competition policy, tax policy, labor law reforms, and universa…
S18
[Briefing #50] Internet governance in November 2018 — 3.Gig economy is in focus again, explained Ms Marilia Maciel, digital policy senior researcher at DiploFoundation. The g…
S19
Strengthening Worker Autonomy in the Modern Workplace | IGF 2023 WS #494 — Furthermore, a nationwide strike organized by the Indian Federation of App Transport Workers in 2020 demonstrates worker…
S20
Building Inclusive Societies with AI — Aditya Natraj provided crucial perspective on India’s bottom quartile, pointing out that over 200 million people remain …
S21
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Majorannouncementsfrom Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S22
Laying the foundations for AI governance — Contrary to common assumptions, Seaford argued that “companies want clear regulation but need to avoid unpredictability …
S23
eTrade for all leadership roundtable: Unlocking digital trade for inclusive development — An ILO study in 2023 showed an opportunity to augment work through AI, linked to skilling. The impact of automation is e…
S24
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — The skill requirements are changing rapidly, making continuous learning and upskilling essential.
S25
Shaping the Future AI Strategies for Jobs and Economic Development — Continuous learning and upskilling will be essential for workforce adaptation to rapid technological change across all s…
S26
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Sector-Specific Applications and Challenges The greatest innovations in healthcare and global health have been based on…
S27
WS #53 Leveraging the Internet in Environment and Health Resilience — Artificial intelligence and other technologies should be designed to support rather than replace human healthcare provid…
S28
Shaping the Future AI Strategies for Jobs and Economic Development — -Workforce Transformation and Job Impact: A central theme throughout both panels was whether AI will replace or enhance …
S29
World in Numbers: Jobs and Tasks / DAVOS 2025 — – Both speakers emphasized the importance of continuous learning and adaptation to technological changes. Both speakers…
S30
AI: The Great Equaliser? — Ultimately, while AI has the potential to act as an equaliser, the analysis also recognises the caveats and conditions t…
S31
Leveraging the UN system to advance global AI Governance efforts — Gilbert Houngbo from the International Labour Organization (ILO) discussed the impact of AI on jobs, acknowledging both …
S32
The impact of AI on jobs and workforce — The ILO’s webinar was triggered by the recent impact of ChatGPT on our society and jobs. OpenAI’s ChatGPT, in particular…
S33
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — The integration of theoretical knowledge with practical skills is crucial in meeting the needs of employers. At the hear…
S34
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S35
AI impact on employment still limited — A newstudyby Yale’s Budget Lab suggests AI has yet to cause major disruption in the US labour market. Researchers found …
S36
AI as a companion in our most human moments — The goal isn’t to replace human connection, empathy, or professional care. It’s to recognise that AI can play a valuable…
S37
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Moreover, while AI and new technologies have significant potential in agriculture, it is crucial to understand that they…
S38
Enhancing rather than replacing humanity with AI — Successful applications preserve human agency. People choose when and how to use AI assistance based on their needs and …
S39
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Economic | Development Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand…
S40
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Economic | Future of work Historical Context and Future of Technological Unemployment Historical evidence shows that t…
S41
Who Benefits from Augmentation? / DAVOS 2025 — Kumar argues that AI can lead to increased productivity and the creation of new job opportunities. He suggests that this…
S42
From Technical Safety to Societal Impact Rethinking AI Governanc — Historical patterns demonstrate that technology does not automatically benefit everyone without deliberate intervention …
S43
Comprehensive Discussion Report: The Future of Artificial General Intelligence — Beddoes references the historical economic argument against the ‘lump of labor fallacy,’ suggesting that technological a…
S44
AI for Social Empowerment_ Driving Change and Inclusion — Focus on enhancing job quality and productivity rather than just preventing job losses
S45
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Radhicka Kapoor provided a more nuanced perspective, citing research showing that while most jobs will be exposed to AI …
S46
Building Trustworthy AI Foundations and Practical Pathways — “But similarly now, econ of maybe writing novels is gone.”[20]. “The movie industry is worried.”[21]. “That entire econo…
S47
The mismatch between public fear of AI and its measured impact — Artificial intelligencehas become one of the loudest topics in public discourse. Headlines speak of mass job displacemen…
S48
Why science metters in global AI governance — She points out that predictions of massive job displacement require policies such as universal basic income, reskilling …
S49
Flexibility 2.0 / Davos 2025 — A significant portion of the discussion focused on the challenges faced by gig workers and the need for new forms of soc…
S50
DigiSov: Regulation, Protectionism, and Fragmentation | IGF 2023 WS #345 — Another point of concern raised in the analysis is the potential risk associated with policy development on the applicat…
S51
Keynote-Bejul Somaia — “The primary opportunity area here is in the application layer.”[25]. “And this requires building applications that unde…
S52
How AI Drives Innovation and Economic Growth — And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to ad…
S53
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — This panel discussion examined the transformative impact of artificial intelligence on the future of work, exploring bot…
S54
The Foundation of AI Democratizing Compute Data Infrastructure — Yann LeCun offered a realistic assessment of efficiency improvements, noting that while industry has strong incentives t…
S55
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — Development | Economic | Sociocultural Jovan proposes a structured approach to AI risk assessment that prioritizes imme…
S56
The future of work: preparing for automation and the gig economy — Concerns about the future of work also come from ongoing technological advancements in automation and AI. Some worry tha…
S57
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Her findings suggested that digitalisation might affect 10.4% of jobs in low-income countries positively, while in high-…
S58
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — The skill requirements are changing rapidly, making continuous learning and upskilling essential.
S59
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — The argument emphasizes that the primary threat to employment is not AI replacing workers directly, but rather workers b…
S60
Shaping the Future AI Strategies for Jobs and Economic Development — -Workforce Transformation and Job Impact: A central theme throughout both panels was whether AI will replace or enhance …
S61
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — And you can design them to be voice first, which in a way is inducing trust because now they are speaking to someone and…
S62
WS #53 Leveraging the Internet in Environment and Health Resilience — Artificial intelligence and other technologies should be designed to support rather than replace human healthcare provid…
S63
The rise of tech giants in healthcare: How AI is reshaping life sciences — The intersection of technology and healthcareis rapidly evolving, fuelled by advancements in ΑΙ and driven by major tech…
S64
Ateliers : rapports restitution et séance de clôture — Aurélien Macé Apparemment, j’ai droit à 6,6 minutes, deux fois plus que les autres, ce qu’on m’a dit. Le thème de vendre…
S65
AI could save billions but healthcare adoption is slow — AI is being hailed as atransformative force in healthcare, with the potential to reduce costs andimprove outcomesdramati…
S66
Strengthening Worker Autonomy in the Modern Workplace | IGF 2023 WS #494 — In conclusion, the analysis highlights the negative impact of technology on various social issues, including labour expl…
S67
Designing Indias Digital Future AI at the Core 6G at the Edge — This cultural adaptation extends to economic structures, with Roy noting India’s approximately 490 million informal work…
S68
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S69
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S70
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — The discussion began with a cautiously optimistic tone, acknowledging both opportunities and risks. However, the tone be…
S71
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S72
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S73
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S74
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S75
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S76
WS #484 Innovative Regulatory Strategies to Digital Inclusion — The discussion maintained a collaborative and solution-oriented tone throughout, with experts building on each other’s i…
S77
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S78
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S79
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S80
Global AI Policy Framework: International Cooperation and Historical Perspectives — The discussion maintained a constructive and optimistic tone throughout, despite acknowledging significant challenges. S…
S81
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S82
Regional Leaders Discuss AI-Ready Digital Infrastructure — The discussion maintained a consistently optimistic yet pragmatic tone throughout. Panelists were enthusiastic about AI’…
S83
Keynote-Brad Smith — The tone is optimistic yet realistic, maintaining a balance between acknowledging serious challenges and expressing conf…
S84
Comprehensive Report: Preventing Jobless Growth in the Age of AI — The tone was cautiously optimistic but realistic. While panelists generally agreed that AI wouldn’t lead to permanent ma…
S85
Skilling and Education in AI — The tone was cautiously optimistic throughout. Speakers acknowledged both the tremendous opportunities AI presents for I…
S86
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S87
Building Inclusive Societies with AI — -Collaborative spirit: All panelists demonstrated willingness to work together across sectors -Inclusive perspective: S…
S88
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S89
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S90
High Level Session 3: AI &amp; the Future of Work — Jonathan Charles: Good morning, ladies and gentlemen. Thank you for getting out of bed so early for this. Distinguished …
S91
The Intelligent Coworker: AI’s Evolution in the Workplace — -Workforce Impact and Career Evolution- Discussion of how AI will reshape job structures, eliminate traditional entry-le…
S92
Introduction to cyber diplomacy — Striking a balance between comprehensive engagement and the need to keep to the agenda, the moderator judiciously decide…
S93
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — No single institution, no matter how large or how well resourced, can navigate this epoch alone. The journey from $4 tri…
S94
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — No single institution, no matter how large or how well resourced, can navigate this epoch alone. The journey from $4 tri…
S95
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion-moderator-sidharth-madaan — It’s very interesting. First, I don’t think any of us have any answers. We will try. The fundamental point, you know, an…
S96
AI cheating scandal at University sparks concern — Hannah, a university student,admits to using AIto complete an essay when overwhelmed by deadlines and personal illness. …
S97
Empowering Workers in the Age of AI — – Juan Ivan Martin Lataix- Tom Wambeke Economic | Development ILO research published in May showing 3.3% of global emp…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Deepak Bagla
5 arguments176 words per minute842 words286 seconds
Argument 1
Disruption will be toughest in the next 5 years; psychological adaptation and reskilling are essential (Deepak Bagla)
EXPLANATION
Bagla warns that the coming five‑year period will experience the most intense AI‑driven disruption, requiring workers to cope psychologically with possible job insecurity. He stresses that reskilling will be crucial for individuals to remain relevant after the disruption.
EVIDENCE
He states, “I think next 5 years is going to be one of the toughest times of disruption” and later adds that “disruption in the next five years and 10-year period will be a lot for all of us to learn psychologically… and then tend to see what is it which we can pick up… the reskilling piece coming in” [16][27-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussion notes that the next five to ten years will bring unprecedented disruption and Bagla emphasised the need for psychological preparation and reskilling [S1].
MAJOR DISCUSSION POINT
Disruption timeline and need for reskilling
DISAGREED WITH
Radhika, Sanjeev Bikhchandani
Argument 2
Introduce AI and tinkering at school level to prepare task‑oriented future workers (Deepak Bagla)
EXPLANATION
Bagla proposes embedding AI education and hands‑on tinkering activities in school curricula so that children develop task‑oriented skills early on. This aims to create a future workforce that can adapt to AI‑augmented job profiles.
EVIDENCE
He explains that his team is “trying to bring AI and tinkering at the school level… many of them may not be looking at going to a very formal education system, but getting into a job profile there and then. And it’s more task-oriented” [20-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bagla referenced the “Aatil Tinkering Lab” and school-level AI activities, and the Atal Innovation Mission runs 10,000 tinkering labs that have nurtured over 1.1 crore young entrepreneurs [S1][S5].
MAJOR DISCUSSION POINT
Early AI education
DISAGREED WITH
Sanjeev Bikhchandani
Argument 3
Prioritise the application layer of the AI stack, enabling small players to execute solutions quickly (Deepak Bagla)
EXPLANATION
Bagla argues that the most impactful AI work lies in the application layer, where small innovators can rapidly develop and deploy solutions. Focusing here can accelerate adoption across the economy.
EVIDENCE
He says, “I think on the application side is where we will have… a very interesting play on actually the small ones and actually getting them executed” [130-132].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research highlights the application layer as low-entry-barrier space where small firms can compete and drive creative destruction, and India has excelled at this layer [S10][S11].
MAJOR DISCUSSION POINT
Application‑focused AI strategy
Argument 4
Focus on the application side of the AI stack, leveraging small innovators and addressing education disruption and emerging age‑based task forces (Deepak Bagla)
EXPLANATION
Bagla expands on the need to concentrate on AI applications, especially by small firms, while also noting that traditional education timelines may become obsolete and younger workers could enter the labour market directly. This reflects a shift in both technology deployment and talent pipelines.
EVIDENCE
He notes, “I think on the application side… small ones… getting them executed” and later observes that “master’s students are feeling they don’t longer need to pay that big tuition fees because they are no longer getting challenged… maybe a 13-year-old is ready to do a job in a task” indicating disruption to education and age-based task forces [130-134][138-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The application layer enables small innovators (S10, S11) and Bagla noted that age barriers may disappear, with 13‑year‑olds ready for task‑oriented work (S1).
MAJOR DISCUSSION POINT
Strategic AI application focus and education disruption
Argument 5
India stands to capture the largest share of AI’s “delta multiplier”, making the country the primary beneficiary of AI‑driven economic gains.
EXPLANATION
Bagla argues that the multiplier effect of AI—whereby productivity gains translate into broader economic growth—will be most pronounced for India, positioning it as a key winner in the global AI landscape.
EVIDENCE
In his rapid-fire closing remark he says, “the biggest benefit of the delta multiplier of AI is India or will be India” [175].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bagla stated that India will be the biggest benefactor of AI’s delta multiplier, a claim echoed in mission briefings [S4][S5].
MAJOR DISCUSSION POINT
National advantage from AI
R
Radhika
6 arguments186 words per minute1081 words346 seconds
Argument 1
Only 3‑4 % of jobs are fully automatable; about 20 % will see some tasks automated, creating productivity gains (Radhika)
EXPLANATION
Radhika cites research showing that only a small share of occupations are at high risk of total automation, while a larger share will experience partial automation that can free up time for new tasks and productivity improvements.
EVIDENCE
She references an ILO study reporting that “the share of jobs where almost all the tasks had a high likelihood of automation… was somewhere between 3 % or 4 %” and that “the share of jobs where some tasks were going to be automated… was about 20 % of the jobs” [37-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ILO-based studies cited in the panel report that 3-4% of jobs face near-total automation while roughly 20% will see partial task automation [S1][S13].
MAJOR DISCUSSION POINT
Extent of automation risk
AGREED WITH
Deepak Bagla, Sanjeev Bikhchandani
DISAGREED WITH
Deepak Bagla, Sanjeev Bikhchandani
Argument 2
Broad skilling programmes, coupled with financial and digital infrastructure support for MSMEs, are needed for an inclusive transition (Radhika)
EXPLANATION
Radhika stresses that beyond reskilling, small and micro enterprises need funding, broadband access, and digital tools to adopt AI, ensuring that the informal sector benefits from the transition.
EVIDENCE
She points out the need for “greater AI adoption amongst micro and small enterprises… they will need a lot more than skilling… they are going to need a lot of financial support for adopting AI… digital infrastructure access to broadband” [172-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusive reskilling frameworks stress the need for digital infrastructure and financing for SMEs, and the panel highlighted policy support for displaced workers and MSME adoption of AI [S15][S16][S1].
MAJOR DISCUSSION POINT
Inclusive skilling and infrastructure
AGREED WITH
Deepak Bagla, Sanjeev Bikhchandani
Argument 3
Comprehensive policies—industrial, macro‑economic, trade, labour, and social protection—are required to absorb displaced workers and enhance productivity (Radhika)
EXPLANATION
Radhika argues that a multi‑dimensional policy package is essential to re‑integrate workers whose jobs are displaced and to boost productivity in occupations where AI augments tasks.
EVIDENCE
She notes that “we need to think about industrial policy, macro-economic policy, trade policies, labour market policies, in particular, social protection” and that “a small proportion of people will lose their jobs… we need to think about how they are going to be absorbed in other sectors” [44-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion called for industrial, macro-economic, trade and social-protection policies to manage displacement, aligning with broader calls for comprehensive policy responses [S1][S17].
MAJOR DISCUSSION POINT
Policy framework for transition
AGREED WITH
Deepak Bagla
DISAGREED WITH
Deepak Bagla
Argument 4
Labour laws must be updated to cover platform and gig work, providing social protection for informal workers (Radhika)
EXPLANATION
Radhika highlights that existing labour regulations lag behind the rise of platform‑based and gig employment, calling for updated conventions and social security measures to protect these workers.
EVIDENCE
She observes that “labour regulations have not kept pace… there is a proliferation of non-standard employer-employee arrangements… there’s a need to update that… ILO conversation… India is leading… code and social security which seeks to provide social protection even to platform workers” [161-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Experts highlighted the lag in labour regulations for platform and gig arrangements and the need for updated legal frameworks and social security measures [S18][S19].
MAJOR DISCUSSION POINT
Updating labour law for gig economy
AGREED WITH
Deepak Bagla
Argument 5
Informal & gig economy: Large share of India’s workforce (agriculture, self‑employment, micro‑enterprises) risks being left behind; needs AI adoption, broadband, and financing (Radhika)
EXPLANATION
Radhika points out that the majority of India’s workers are in agriculture, self‑employment, or tiny enterprises, and that without targeted AI adoption and digital infrastructure they could be excluded from productivity gains.
EVIDENCE
She provides sector statistics-“45 % of its workforce is still in the agricultural sector, 55 % self-employed, 95 % of employment is in enterprises with less than 10 workers”-and stresses the need for AI adoption, broadband, and financing for these groups [165-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel noted that discussions often ignore the 90% of India’s workforce in informal sectors, emphasizing the need for broadband, AI adoption and financing to avoid exclusion [S1][S15][S20].
MAJOR DISCUSSION POINT
Risks to informal sector
Argument 6
Partial automation of tasks will boost productivity, raise wages and prices, and trigger a virtuous cycle of higher demand, investment and job creation.
EXPLANATION
Radhika points out that when only some tasks within an occupation are automated, workers can reallocate time to higher‑value activities, which lifts productivity and, through higher wages and spending, fuels broader economic expansion.
EVIDENCE
She notes that “enhanced productivity … has an implication in wages and prices… boosts demand in the economy, which then drives more job creation and investment… a virtuous cycle of growth, investment, job creation” [49-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of task automation show that partial automation can free workers for higher-value activities, driving productivity, wage growth and broader economic expansion [S13].
MAJOR DISCUSSION POINT
Economic spill‑over effects of partial automation
S
Sanjeev Bikhchandani
5 arguments163 words per minute978 words358 seconds
Argument 1
Current hiring remains strong; AI is likely to boost productivity rather than cause immediate job loss, as with past tech waves (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev reports that job postings on Naukri have not declined, indicating that AI has not yet reduced hiring. He draws a parallel with the 1980s computer adoption, which raised productivity without large‑scale layoffs.
EVIDENCE
He states, “Naukri growth has not been impacted… we are not seeing a reduction in hiring” and recounts that after computers were introduced in banks in the mid-1980s “nobody lost jobs, people got more productive… MIS… served their customers better” [57-58][64-70].
MAJOR DISCUSSION POINT
Hiring trends and historical analogy
AGREED WITH
Deepak Bagla, Radhika
Argument 2
Individuals should learn multiple AI platforms each quarter to stay employable (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev advises a proactive learning strategy: mastering three new AI platforms every quarter, totaling twelve per year, to maintain employability in an AI‑driven market.
EVIDENCE
He says, “Learn how to use three AI platforms every quarter… By the end of one year, you know 12 AI platforms” [65-67].
MAJOR DISCUSSION POINT
Personal upskilling cadence
AGREED WITH
Deepak Bagla, Radhika
Argument 3
Formal credentials act as a strong filter, but continuous upskilling and experience remain crucial (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev acknowledges that degrees from elite institutions signal ability and commitment, yet stresses that real work performance, experience, and ongoing learning are essential for career success.
EVIDENCE
He explains that an IIT degree “is a fantastic filter… we hire for the fact that it’s a fantastic filter… but business is about people, managing people, leading teams… which comes with years of experience and maturity” [144-150].
MAJOR DISCUSSION POINT
Credentials vs experience
DISAGREED WITH
Deepak Bagla
Argument 4
Historical tech adoption: Past introduction of computers increased productivity without massive job loss, suggesting a similar pattern may repeat (Sanjeev Bikhchandani)
EXPLANATION
Sanjeev recounts the 1980s rollout of computers in Indian banks, noting that while adoption was initially resisted, it ultimately boosted productivity without causing layoffs, implying a comparable outcome for AI.
EVIDENCE
He narrates that after computers were introduced in banks in 1985 “nobody lost jobs, people got more productive, got MIS… served their customers better” [64-70].
MAJOR DISCUSSION POINT
Lesson from past technology waves
AGREED WITH
Deepak Bagla
Argument 5
Early technology literacy, such as being PC‑literate in the 1980s, provided a decisive competitive edge and job security during past digital disruptions.
EXPLANATION
Sanjeev illustrates that individuals who mastered emerging technologies early were less vulnerable to layoffs when those technologies became mainstream, highlighting the protective value of proactive skill acquisition.
EVIDENCE
He recounts that “if they were sacking then, I would have been the only guy who was PC literate… I would have been the last to go” when computers were introduced in his workplace [79-81].
MAJOR DISCUSSION POINT
Protective role of early tech upskilling
P
Prashant Warier
2 arguments210 words per minute840 words239 seconds
Argument 1
Healthcare AI must navigate regulatory clearance and liability issues; AI will serve as decision‑support rather than replace clinicians (Prashant Warier)
EXPLANATION
Prashant highlights that medical AI applications must obtain regulatory approvals (e.g., FDA, CDSCO) and cannot assume clinical liability, positioning AI as a tool that assists doctors rather than substitutes them.
EVIDENCE
He notes that “everything AI does today in healthcare… has to be FDA cleared… every country has its own regulatory body… until AI can take liability, doctors will make the decision” and that AI currently provides decision-support, not replacement [108-119].
MAJOR DISCUSSION POINT
Regulation and liability in health AI
Argument 2
Healthcare: AI can upskill radiologists, automate primary‑care tasks (symptom triage, test recommendation, note‑taking), but regulatory and liability constraints limit full automation (Prashant Warier)
EXPLANATION
Prashant describes specific AI use‑cases in radiology and primary care—addressing radiologist shortages, triaging symptoms, recommending tests, and automating note‑taking—while reiterating that regulatory clearance and liability concerns prevent full automation.
EVIDENCE
He cites India’s radiologist shortage and says AI can “automatically interpret radiology images… automate primary-care tasks such as symptom triage, test recommendation, and note-taking” and then adds the regulatory and liability hurdles described earlier [99-104][108-119].
MAJOR DISCUSSION POINT
AI applications in clinical workflow
S
Speaker 1
2 arguments191 words per minute474 words148 seconds
Argument 1
The present moment constitutes a defining turning point for work, marked by both emerging productivity gains and rising anxiety about disruption to knowledge‑based jobs.
EXPLANATION
Speaker 1 frames the current era as simultaneously offering new possibilities—such as higher productivity and the creation of novel jobs—while also generating considerable concern about how AI will disrupt existing white‑collar occupations.
EVIDENCE
He opens by stating, “We’re at a very defining moment in the history of work” and then contrasts “new possibilities, new productivity unlocks, new jobs being created” with “a lot of growing anxiety around what would it mean and the kind of disruption it will bring to work, especially the knowledge work, the white-collar jobs” [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel moderators described the current period as a defining moment with new productivity opportunities alongside anxiety about white-collar job disruption [S1].
MAJOR DISCUSSION POINT
Dual nature of AI impact on work
Argument 2
Optimising local optima in AI‑driven transformations can help societies discover a global balance between productivity gains and employment outcomes.
EXPLANATION
Speaker 1 suggests that by focusing on incremental improvements (local optima) within the AI transition, economies can eventually achieve an overall equilibrium that reconciles efficiency with broader social goals.
EVIDENCE
He remarks, “I think if you optimise local optima, we are somewhere going to find the global balance” [86].
MAJOR DISCUSSION POINT
Strategic approach to AI adoption
Agreements
Agreement Points
All speakers stress the need for upskilling/reskilling and continuous learning to stay employable in the AI transition.
Speakers: Deepak Bagla, Radhika, Sanjeev Bikhchandani
Disruption will be toughest in the next 5 years; psychological adaptation and reskilling are essential (Deepak Bagla) Broad skilling programmes, coupled with financial and digital infrastructure support for MSMEs, are needed for an inclusive transition (Radhika) Individuals should learn multiple AI platforms each quarter to stay employable (Sanjeev Bikhchandani)
Bagla warns that the coming five-year period will be the most disruptive and that workers must learn new skills; Radhika calls for broad skilling programmes and reskilling of workers; Sanjeev recommends a rapid cadence of learning AI tools – all converging on the view that continuous upskilling is essential [16][27-29][44-48][65-67].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the ILO’s call for skilling, reskilling and lifelong learning as core elements of AI-related labour policies [S31] and reflects the emphasis on continuous upskilling highlighted at Davos 2025 and in the World in Numbers report [S29].
AI is expected to augment productivity rather than cause massive job loss; only a small share of jobs are fully automatable.
Speakers: Deepak Bagla, Radhika, Sanjeev Bikhchandani
Disruption will be toughest in the next 5 years; psychological adaptation and reskilling are essential (Deepak Bagla) Only 3‑4 % of jobs are fully automatable; about 20 % will see some tasks automated, creating productivity gains (Radhika) Current hiring remains strong; AI is likely to boost productivity rather than cause immediate job loss, as with past tech waves (Sanjeev Bikhchandani)
Bagla acknowledges disruption but focuses on reskilling; Radhika cites ILO data showing only 3-4 % of occupations face total automation and ~20 % face partial automation; Sanjeev notes that job postings have not declined and past computer adoption increased productivity without layoffs – together they convey that AI will largely augment work, not eliminate it [27-29][37-42][57-58][64-70].
POLICY CONTEXT (KNOWLEDGE BASE)
The view that AI will mainly augment productivity aligns with the ‘collaboration not displacement’ narrative in AI strategies for jobs [S28] and is supported by evidence that only 3-4 % of occupations face full automation [S45], as well as historical analyses showing limited net job loss [S40].
Coordinated policy and social protection measures are required to manage the AI transition, especially for informal and gig workers.
Speakers: Radhika, Deepak Bagla
Comprehensive policies—industrial, macro‑economic, trade, labour, and social protection—are required to absorb displaced workers and enhance productivity (Radhika) Labour laws must be updated to cover platform and gig work, providing social protection for informal workers (Radhika) Most critical point when everyone works together the government, society, academia … is core to seeing any element of success (Deepak Bagla)
Radhika calls for a multi-dimensional policy package and updated labour regulations to protect platform and gig workers; Bagla emphasizes that government, society, and academia must collaborate for success – both underline the need for coordinated policy and social safety nets [44-48][161-168][175].
POLICY CONTEXT (KNOWLEDGE BASE)
ILO discussions stress coordinated social protection and labour-market policies for informal and gig workers in the AI transition [S31], and Davos 2025 highlighted the need for new safety nets for gig economies [S49]; proposals such as universal basic income and reskilling programmes are cited as policy levers [S48].
Historical technology adoption (e.g., computers) increased productivity without large‑scale layoffs, suggesting a similar pattern may repeat for AI.
Speakers: Deepak Bagla, Sanjeev Bikhchandani
The first job to go when digitisation happened was the teller … because you started taking it out of the machine (Deepak Bagla) Historical tech adoption: Past introduction of computers increased productivity without massive job loss, suggesting a similar pattern may repeat (Sanjeev Bikhchandani)
Bagla recounts the teller story as the first job displaced by digitisation; Sanjeev recounts the 1980s computer rollout in Indian banks that boosted productivity without layoffs – both use history to argue AI may follow a similar trajectory [7-12][64-70].
POLICY CONTEXT (KNOWLEDGE BASE)
Historical studies of past technological waves, including computerisation, show productivity gains without large-scale layoffs, a pattern reiterated in the ‘Preventing Jobless Growth’ report [S40] and in the AGI future discussion referencing the lump-of-labour fallacy [S43].
Similar Viewpoints
Both recognise that traditional degree timelines are losing relevance and that early, task‑oriented skill acquisition (even by very young workers) will become a key employability factor, while elite credentials remain a useful filter but must be complemented by continuous learning [138-144][144-150].
Speakers: Deepak Bagla, Sanjeev Bikhchandani
Focus on the application side of the AI stack, leveraging small innovators and addressing education disruption and emerging age‑based task forces (Deepak Bagla) Formal credentials act as a strong filter, but continuous upskilling and experience remain crucial (Sanjeev Bikhchandani)
Unexpected Consensus
AI will primarily serve as an augmenting tool rather than a replacement for professionals.
Speakers: Deepak Bagla, Prashant Warier
Disruption in the next five years … we need to pick up what we can … reskilling piece coming in (Deepak Bagla) Healthcare AI must navigate regulatory clearance and liability issues; AI will serve as decision‑support rather than replace clinicians (Prashant Warier)
Bagla’s emphasis on reskilling and task-oriented augmentation aligns with Prashant’s view that AI in healthcare will act as decision-support, not a substitute, highlighting a shared belief that AI augments existing roles across sectors – a convergence not explicitly anticipated at the start of the panel [27-29][108-119].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple authorities describe AI as a complementary tool rather than a replacement, e.g., the ‘enhancing rather than replacing humanity’ perspective [S38], the ILO’s augmentation framing [S28], and sector-specific examples emphasizing human agency [S36][S44].
Overall Assessment

The panel shows strong convergence on three core themes: (1) the imperative of upskilling/reskilling to navigate AI‑driven disruption; (2) the expectation that AI will largely augment productivity with limited full automation; (3) the need for coordinated policy, social protection, and inclusive measures for informal and gig workers. Historical analogies and the view of AI as an augmenting tool further reinforce these points.

High consensus – most speakers echo similar conclusions despite differing emphases, indicating a shared understanding that proactive skill development and supportive policy frameworks are essential for a positive AI transition.

Differences
Different Viewpoints
Magnitude and timeline of job displacement due to AI
Speakers: Deepak Bagla, Radhika, Sanjeev Bikhchandani
Disruption will be toughest in the next 5 years; psychological adaptation and reskilling are essential (Deepak Bagla) Only 3‑4 % of jobs are fully automatable; about 20 % will see some tasks automated, creating productivity gains (Radhika) Current hiring remains strong; AI is likely to boost productivity rather than cause immediate job loss (Sanjeev Bikhchandani)
Bagla warns of a severe, near-term disruption wave that will require massive psychological adjustment and reskilling [16][27-29]. Radhika points to ILO data showing a small share of occupations at high risk of total automation and a larger share only partially affected, implying limited job loss overall [37-42]. Sanjeev observes that job postings on Naukri have not declined, suggesting AI has not yet reduced hiring and may instead raise productivity [57-58]. The three speakers therefore disagree on how extensive and immediate the employment impact will be.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of public discourse versus empirical data show disagreement over the timing and scale of AI-induced displacement, with studies noting a mismatch between fear and measured impact [S47] and early evidence of limited labour market disruption [S35].
Importance of formal credentials versus early AI‑focused education
Speakers: Deepak Bagla, Sanjeev Bikhchandani
Introduce AI and tinkering at school level to prepare task‑oriented future workers (Deepak Bagla) Formal credentials act as a strong filter, but continuous upskilling and experience remain crucial (Sanjeev Bikhchandani)
Bagla proposes embedding AI and hands-on tinkering in school curricula and suggests that traditional degree timelines may become obsolete, even envisioning 13-year-olds entering task-based work [20-25][138-144]. Sanjeev counters that elite degrees (e.g., IIT) remain a powerful hiring filter and that real competence comes from years of experience and ongoing learning, though he also stresses upskilling [144-150]. They share the goal of preparing workers but diverge on whether early school-level AI training can replace or diminish the role of formal higher-education credentials.
Policy focus: market‑driven application layer versus comprehensive macro‑policy package
Speakers: Deepak Bagla, Radhika
Prioritise the application side of the AI stack, enabling small players to execute solutions quickly (Deepak Bagla) Comprehensive policies—industrial, macro‑economic, trade, labour, and social protection—are required to absorb displaced workers and enhance productivity (Radhika)
Bagla argues that the most impactful AI work lies in the application layer, especially for small innovators, and that focusing there will accelerate adoption [130-132]. Radhika stresses that a multi-dimensional policy framework covering industrial, macro-economic, trade, labour and social protection is essential to manage displacement and boost productivity [44-48]. The disagreement centers on whether the priority should be a technology-focused, market-driven push or a broader, government-led policy response.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates contrast a market-driven focus on the application layer with broader macro-policy approaches; this tension is discussed in IGF 2023 on application-layer regulation [S50] and in keynote remarks stressing application-specific governance [S51][S52].
Unexpected Differences
Current impact of AI on hiring trends
Speakers: Deepak Bagla, Sanjeev Bikhchandani
Disruption will be toughest in the next 5 years; psychological adaptation and reskilling are essential (Deepak Bagla) Current hiring remains strong; AI is likely to boost productivity rather than cause immediate job loss (Sanjeev Bikhchandani)
Bagla’s forward‑looking warning implies that hiring will soon be affected by disruption, whereas Sanjeev, based on real‑time Naukri data, reports no observable reduction in hiring and emphasizes productivity gains. The contrast between a predicted near‑term hiring shock and observed hiring stability was not anticipated given both speakers’ business backgrounds.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent surveys of UK scale-ups report hiring slowdowns and anticipated cuts due to AI adoption [S34], while other research finds minimal changes in employment patterns since ChatGPT’s launch [S35]; together they illustrate divergent views on AI’s current hiring impact.
Overall Assessment

The panel displayed moderate disagreement on three core fronts: (1) the scale and immediacy of AI‑driven job loss, with Bagla foreseeing severe short‑term disruption, Radhika citing modest automation rates, and Sanjeev observing unchanged hiring; (2) the role of formal education versus early AI‑centric schooling, where Bagla envisions school‑level AI training supplanting traditional degree pathways, while Sanjeev upholds elite credentials as a key hiring filter; (3) the policy approach, with Bagla championing a market‑driven application‑layer focus and Radhika urging a comprehensive macro‑policy and social‑protection package. While there is consensus on the need for upskilling, the speakers diverge on the mechanisms and urgency.

The disagreements are substantive but not irreconcilable; they reflect differing perspectives (business leader vs policy analyst vs academic) rather than outright conflict. The implications are that coordinated action will require aligning expectations about disruption timelines, integrating education reforms with credentialing systems, and balancing market‑led AI application development with robust policy frameworks to ensure an inclusive transition.

Partial Agreements
All three agree that upskilling the workforce is essential to navigate AI‑driven change, but differ on the mechanism: Bagla stresses psychological readiness and reskilling broadly, Sanjeev proposes a fast‑paced cadence of mastering several AI platforms each quarter, while Radhika calls for systemic skilling programmes together with financing and digital infrastructure for small enterprises [16][27-29][65-67][172-176].
Speakers: Deepak Bagla, Sanjeev Bikhchandani, Radhika
Disruption will be toughest in the next 5 years; psychological adaptation and reskilling are essential (Deepak Bagla) Individuals should learn multiple AI platforms each quarter to stay employable (Sanjeev Bikhchandani) Broad skilling programmes, coupled with financial and digital infrastructure support for MSMEs, are needed for an inclusive transition (Radhika)
Both see AI as a tool to augment productivity rather than wholesale job elimination. Bagla focuses on early education to create a task‑oriented workforce, while Radhika highlights that most occupations will only experience partial automation, allowing productivity gains. They share the goal of leveraging AI for productivity but differ on the primary lever (education vs labour‑market analysis) [20-25][37-42].
Speakers: Deepak Bagla, Radhika
Introduce AI and tinkering at school level to prepare task‑oriented future workers (Deepak Bagla) Only 3‑4 % of jobs are fully automatable; about 20 % will see some tasks automated, creating productivity gains (Radhika)
Takeaways
Key takeaways
AI-driven disruption will be most intense in the next 5‑10 years, requiring psychological adaptation and reskilling. Only a small share (3‑4 %) of occupations are fully automatable; about 20 % will see partial task automation that can boost productivity. Historical tech waves (e.g., computers) increased productivity without massive job loss, suggesting AI may follow a similar pattern. Education must shift toward early AI exposure, tinkering, and continuous upskilling; individuals should learn multiple AI platforms regularly. Policy response must be comprehensive—industrial, macro‑economic, trade, labour, and social‑protection measures—to absorb displaced workers and enhance productivity. Healthcare AI will act as decision‑support and productivity enhancer, constrained by regulation and liability; it will not replace clinicians in the near term. The informal sector, gig workers, and MSMEs risk being left behind and need digital infrastructure, financing, and tailored skilling programmes. For India’s AI ecosystem, the priority is the application layer, enabling small innovators to build and deploy solutions quickly.
Resolutions and action items
Introduce AI and tinkering modules at school level (proposed by Deepak Bagla). Encourage individuals to learn at least three new AI platforms each quarter (suggested by Sanjeev Bikhchandani). Develop and implement broader policy packages—including industrial policy, macro‑economic measures, trade policy, labour reforms, and social‑protection schemes—to support displaced workers (Radhika). Prioritise development of AI applications that can be executed by small players, focusing on the application stack (Deepak Bagla). Update labour regulations to cover platform and gig work, ensuring social protection for informal workers (Radhika). Facilitate regulatory pathways for AI in healthcare, ensuring FDA/CDSCO clearance and addressing liability issues (Prashant Warier). Provide financial support, broadband access, and AI adoption assistance to MSMEs and agricultural enterprises (Radhika).
Unresolved issues
Exact magnitude and timing of job displacement versus job creation remain uncertain. How to operationalise large‑scale reskilling and upskilling programmes, especially for the informal sector, is not detailed. Specific mechanisms for financing AI adoption in micro‑enterprises and agriculture are not defined. The path to harmonising AI regulatory approvals across jurisdictions and handling liability for clinical decisions remains open. How education credentials will evolve (e.g., relevance of traditional degrees versus task‑based learning) lacks a concrete roadmap. Implementation timeline and coordination among government, academia, and industry for the proposed actions are not established.
Suggested compromises
Balance between supporting displaced workers through social protection and encouraging productivity gains via partial automation. Use AI as a supportive tool rather than a full replacement in regulated sectors like healthcare, respecting liability and regulatory constraints. Maintain the value of formal credentials as a filter while promoting continuous, task‑oriented upskilling. Focus on rapid application‑layer development by small innovators while still investing in foundational AI research and education.
Thought Provoking Comments
The only job that was once said to be stable – the bank teller – disappeared with digitisation. We now have no playbook; the next five years will be the toughest period of disruption and we must prepare for a world where jobs can vanish and reskilling becomes essential.
Bagla uses a concrete historical example to shatter the myth of any ‘future‑proof’ job, highlighting the unprecedented uncertainty of the AI era and the urgency of psychological and skill adaptation.
This set the tone for the whole panel, prompting other speakers to frame their answers around uncertainty, the need for reskilling, and the lack of a historical roadmap. It led Radhika to bring data‑driven nuance and Sanjeev to share his own ‘no‑playbook’ experience.
Speaker: Deepak Bagla
Only 3‑4 % of jobs globally have a high likelihood of full automation, while about 20 % will see some tasks automated, freeing time for new tasks. The transition therefore requires not just skilling but industrial, macro‑economic, trade and social‑protection policies.
She grounds the debate in empirical research, counters alarmist narratives, and expands the conversation from individual reskilling to systemic policy design.
Her data‑driven point shifted the discussion from fear‑based speculation to a balanced view of risk and opportunity, prompting Sanjeev to reference historical productivity gains and prompting later comments about the informal sector.
Speaker: Radhika
Learn how to use three AI platforms every quarter – by the end of a year you’ll have mastered twelve. In the early PC era I was the only literate person and survived; the same will happen with AI.
Provides a clear, actionable prescription rooted in personal anecdote, turning abstract concerns into a concrete skill‑building strategy and illustrating how early adoption can be a career safeguard.
This practical advice resonated with the audience and reinforced the earlier theme of continuous learning. It also sparked the later exchange on credentials versus skills, and reinforced the panel’s emphasis on proactive upskilling.
Speaker: Sanjeev Bikhchandani
In healthcare, AI will not replace doctors but will upscale them – e.g., AI‑driven radiology interpretation, note‑taking agents, and decision‑support tools – while regulatory approval and liability remain major hurdles.
He introduces sector‑specific nuance, showing that AI’s impact varies by regulation and liability concerns, and that the technology is more about augmentation than replacement.
His sector focus broadened the conversation beyond generic job loss, leading to a deeper discussion on how AI can be integrated responsibly, and highlighted the need for regulatory frameworks, which later tied into Radhika’s points on policy.
Speaker: Prashant Warier
AI is already challenging the education model – master’s students question paying high tuition because AI gives them answers, and age barriers may disappear as 13‑year‑olds can perform task‑based work.
He spotlights a disruptive ripple effect of AI on higher education and talent pipelines, suggesting a future where traditional degree structures lose relevance.
This comment pivoted the dialogue toward long‑term structural change, prompting Sanjeev to discuss the enduring value of credentials as filters and raising questions about how hiring will evolve.
Speaker: Deepak Bagla
45 % of India’s workforce is in agriculture and 55 % are self‑employed; the informal sector – which makes up the vast majority of jobs – risks being left out of the AI conversation and will need infrastructure, finance and digital access, not just skilling.
She expands the scope of the discussion to include the informal economy, reminding the panel that AI policy must be inclusive and not just focused on formal white‑collar jobs.
This reframed the debate from a narrow focus on knowledge work to a broader development challenge, influencing the final rapid‑fire answers about inclusive AI transition and underscoring the need for systemic support.
Speaker: Radhika
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from vague anxiety to a nuanced, data‑backed, and sector‑specific analysis. Deepak Bagla’s opening anecdote about the teller created a sense of urgency, which Radhika tempered with empirical evidence and a call for comprehensive policy. Sanjeev’s actionable learning roadmap and Prashant’s healthcare‑focused augmentation narrative added concrete pathways and highlighted regulatory complexities. Bagla’s later insight on education disruption and Radhika’s emphasis on the informal sector broadened the lens to systemic, long‑term implications. Together, these comments redirected the panel from speculative fear to a balanced view of risk, opportunity, and the multi‑dimensional policy response needed for India’s AI‑driven future.

Follow-up Questions
What specific policies should businesses and policymakers implement to support workers displaced by AI over the next 5‑10 years?
Both highlighted the upcoming disruption and the need for policy action but did not outline concrete measures, indicating a gap that requires further exploration.
Speaker: Deepak Bagla, Radhika
How can we develop a granular, task‑level analysis of automation exposure across occupations in India?
Radhika emphasized the need for more nuanced, task‑based understanding of automation impacts, suggesting that current data are insufficient for targeted interventions.
Speaker: Radhika
What industrial, macro‑economic, trade, and social‑protection measures are required to absorb workers whose jobs are displaced by AI?
She noted that reskilling alone is insufficient and that broader policy levers are needed, pointing to a research gap on the design of such measures.
Speaker: Radhika
What regulatory pathways are needed to enable AI‑driven primary‑care tools in low‑resource settings while addressing liability concerns?
Prashant identified regulation and liability as major barriers to AI adoption in healthcare, indicating the need for research on appropriate regulatory frameworks.
Speaker: Prashant Warier
How should regulatory frameworks evolve to allow AI clinical decision support while managing doctor liability?
Related to the previous point, this question focuses specifically on liability and the evolution of medical device/AI regulations.
Speaker: Prashant Warier
Which layer of the AI stack should India prioritize for investment to maximize economic and employment impact?
Deepak asked where to double‑down within the AI stack, but did not provide a definitive answer, leaving the optimal focus area open for investigation.
Speaker: Deepak Bagla
How will the shift from degree‑based hiring to skill‑based hiring affect recruitment and career progression in India?
Sanjeev discussed the changing value of credentials versus skills, raising the need to study the implications for hiring practices and labor market dynamics.
Speaker: Sanjeev Bikhchandani
How can gig and informal sector workers be included in AI‑driven productivity gains, and what policy changes are needed?
Radhika highlighted that the informal sector is largely omitted from current AI discussions, indicating a research and policy gap.
Speaker: Radhika
What digital infrastructure and financing mechanisms are required for MSMEs and the agricultural sector to adopt AI?
She pointed out the need for broadband, financial support, and other enablers for small enterprises and agriculture, suggesting further study on effective models.
Speaker: Radhika
How can we quantify the emerging task‑creation workforce and its impact on employment dynamics?
Deepak mentioned the lack of data on people moving into task‑creation roles, indicating a need for measurement and analysis.
Speaker: Deepak Bagla
What metrics should define a successful AI transition for India by 2030?
The rapid‑fire round yielded varied visions of success, but concrete, shared metrics are missing, calling for a systematic definition of success indicators.
Speaker: All panelists

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI

Session at a glanceSummary, keypoints, and speakers overview

Summary

Speaker 1 introduced Vivek Raghavan, co-founder of Sarvam, a company developing AI that understands India’s languages and context [1-3]. Raghavan opened by asserting that India is capable of training state-of-the-art models and delivering AI to a billion users [4-7]. He argued that while short-term technical leads such as model size or chip speed are fleeting, long-term national sovereignty in AI is essential and must be built domestically [10-13][19-20]. Raghavan warned that without indigenous AI, India risks becoming a digital colony dependent on foreign technology, emphasizing that AI is a core capability a country cannot forgo [21-27]. He highlighted India’s unique advantage of linguistic diversity-22 official languages and regional variations every 50 km-which must be captured for AI to reflect the voice of the people [28-34]. The large, cost-conscious Indian market provides both demand and the need for low-cost, scalable AI solutions, as illustrated by the success of UPI and the potential for AI to improve public services affordably [35-42]. Sarvam’s strategy is a full-stack sovereign AI platform comprising home-grown models, applications, and infrastructure designed for Indian scale [42-46]. Its models are built from scratch without external data dependencies, yet aim to be world-class state-of-the-art systems [46-52]. The SARAS speech model, trained on diverse Indian data, is claimed to outperform global competitors in Indian-language speech-to-text and text-to-speech quality [53-66][69-71]. Sarvam also offers a 3-billion-parameter vision model that surpasses larger international models on document digitisation and visual-grounding tasks, especially in Indian languages [84-90][91-92]. The company has produced several LLMs, including a small 32K-context model trained on 16 trillion tokens and a 105-billion-parameter model, both benchmarked as superior to comparable global open-source models [93-106]. These achievements were realised by a team of only fifteen young engineers, demonstrating the depth of talent available in India [112-119]. Sarvam is already deploying the models in real-time voice applications serving millions of minutes daily, supporting NGOs, content digitisation, and edge devices such as glasses, while building the compute infrastructure needed for nationwide rollout [120-135]. The discussion concluded that building sovereign AI is both a strategic necessity and a feasible path for India to lead in AI that serves its diverse population and economy [13-27][34-38][42-46].


Keypoints


Major discussion points


AI sovereignty is essential for India’s future.


Raghavan stresses that reliance on foreign models would make India a “digital colony” and that long-term national security depends on building AI in-house, just as India created Aadhaar and the India Stack as open-source public infrastructure [10-13][14-19][24-27].


India’s unique strengths make sovereign AI feasible and necessary.


He highlights the country’s linguistic diversity (22 official languages, regional variation every ~50 km) and massive, cost-conscious market that can drive demand for AI at scale [28-34][35-40].


Sarvam is building a full-stack sovereign AI platform: models, applications, and infrastructure.


The platform is organized around three layers-home-grown models, AI-powered applications, and scalable infrastructure-to deliver “world-class, state-of-the-art” solutions entirely from India [42-46][47-52].


Concrete AI breakthroughs demonstrate world-class capability.


• Speech-to-text (SARAS) and text-to-speech models trained on diverse Indian data outperform global competitors [53-65][69-71].


• Vision models (3 billion-parameter) excel at document digitisation and visual grounding, beating larger international models [84-89][90-91].


• Large language models, from a 32K-context 16-trillion-token model to a 105 billion-parameter LLM, achieve superior benchmarks against GPT-OSS, Gemini Flash, etc., while being trained entirely in India [92-106][108-110].


Deployment focus: real-world applications and scalable infrastructure.


Sarvam already powers over a million minutes of multilingual voice conversations daily, supports NGOs, content dubbing, and is optimizing models for edge devices and custom hardware to deliver AI at “India scale and India cost” [120-136].


Overall purpose / goal


The discussion aims to convince the audience that India not only can but must develop its own sovereign AI ecosystem. By outlining strategic imperatives, showcasing Sarvam’s technical achievements, and illustrating tangible applications, Raghavan calls for continued investment and collaboration to ensure India’s independence and leadership in AI.


Tone of the discussion


The tone is consistently confident and rallying, beginning with a broad, patriotic appeal to sovereignty, moving into a data-driven exposition of India’s advantages, then shifting to a technical, demonstrative mode when describing models and benchmarks. It concludes on an optimistic, forward-looking note, emphasizing youth talent and the potential for even larger breakthroughs. Throughout, the speaker maintains an enthusiastic, persuasive stance without significant negativity or doubt.


Speakers

Speaker 1


– Role/Title: Event moderator/host (introducing the keynote) [S1][S3]


– Area of Expertise:


Vivek Raghavan


– Role/Title: Co-founder of Sarvam (AI company) [S4]


– Area of Expertise: Artificial Intelligence, sovereign AI, speech and language models, large language models, Indian language technology


Additional speakers:


Full session reportComprehensive analysis and detailed insights

Speaker 1 introduced Vivek Raghavan, co-founder of Sarvam, which is building artificial-intelligence systems that can speak India’s many languages and understand its local context [1-3].


Raghavan opened with a concise rallying cry: “India can train state-of-the-art AI models and deliver them to a billion users,” positioning AI sovereignty as a national mandate [4-7].


He argued that fleeting technical bragging rights-such as model size or chip speed-are transitory, whereas home-grown AI is essential to prevent India from becoming a “digital colony.” He cited his work on Aadhaar and the open-source India Stack as proof that publicly built, indigenous technology can scale to serve a nation [10-13][14-19].


India’s unique advantages make sovereign AI both feasible and necessary. First, the country’s linguistic diversity-22 official languages and dialects that shift roughly every 50 km-requires AI that can capture this variation [28-34]. Second, the massive, cost-conscious market creates demand for affordable, scalable solutions; the success of UPI illustrates how technology can achieve mass adoption while remaining inexpensive [35-40].


Sarvam’s response is a full-stack sovereign AI platform organised around three layers: indigenous models, AI-powered applications, and infrastructure built for Indian scale and cost [42-46]. All models are built from scratch with no reliance on external data, yet aim for world-class performance [47-52].


Speech – The SARAS speech-to-text model, trained on extensive Indian data, is best-in-class for Indian languages in blind tests. Its text-to-speech and dubbing capabilities also rank highest in the country, with dubbing that preserves speaker modality, offers precise duration control, and supports mixed-language output [53-66][69-71][133-135].


Vision – A 3-billion-parameter state-space model excels at document digitisation, language-layout understanding, visual grounding, and reading-order prediction, outperforming larger international models on both Indian-language and English tasks [84-90][91-92][136-138].


Large language models – Sarvam has (i) a compact model with a 32 k-token context, trained on 16 trillion tokens for real-time multilingual conversation, and (ii) a 105-billion-parameter LLM, the largest trained entirely in India. The LLM is on par with most open-source and closed-source models of its class and is superior to GPT-OSS 120 B and Gemini Flash in benchmark comparisons [92-106][108-110][139-141].


The development of these models was enabled by a grant from the India AI Mission [142-144].


All of the above were achieved by a team of just fifteen young engineers, underscoring the depth of talent available in India [112-119].


Sarvam’s Servum platform powers more than one million minutes of real-time voice conversation each day across eleven Indian languages; NGOs have generated a crore minutes of calls in a single month, and the platform also supports content digitisation, translation, dubbing, and enterprise and government use-cases [120-130].


To reach every citizen, Sarvam is optimising models for edge deployment on smartphones and augmented-reality glasses [145-147], and is building large-scale compute infrastructure that can deliver AI at “India scale and India cost” [148-150].


In closing, Raghavan reiterated that AI sovereignty is not a luxury but a national mandate. It safeguards India from dependence on foreign technology, leverages the country’s linguistic richness and market size, and capitalises on its youthful talent pool. He called for continued investment, policy backing, and collaborative effort to expand the sovereign AI ecosystem, positioning India to lead globally while serving its diverse population [151-155].


Session transcriptComplete transcript of the session
Speaker 1

I move on to our next keynote speaker, who is Mr. Vivek Raghavan, the co -founder of Sarvam, a company building AI that speaks India’s languages and understands India’s context. In a world dominated by models trained on English language data, their work is a powerful demonstration that sovereign AI capability is not just a luxury, it is a necessity. So, ladies and gentlemen, please welcome Mr. Vivek Raghavan, co -founder of Sarvam.

Vivek Raghavan

I come here to say that India can. And I think that’s the message I want to say. India can. And India can train state -of -the -art models, bring AI to a billion Indians, and do it all. And that’s really the message of why we started Sarvam. I want to talk about the long arc. You know, when you look at, you know, today the world is moving so fast. Everybody talks about where is the largest model or who has the fastest chip. These are all transitory technical advantages. In the long run, it’s sovereignty that matters. And unless we build these things ourselves, it’s something that, you know, will be left behind in the race. In the past 15 years of my life, I worked on building Aadhaar.

Which is India’s… digital identity program. Prior to that, many of these technologies, many of these technologies were proprietary technologies. And we built this kind of self -created technology that is open source and a public infrastructure that is available to all of us. And that led to the creation of the India Stack. So when you look at it over the course of long periods of time, sovereignty will always trump technical leads that are short term. We have a mandate to build. It is not an option whether we want to build in these technologies. AI is a technology that has impact on every single aspect of human life. And it’s a core technology that a country like India must have.

And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will become a digital colony which is dependent on other countries for this core, core technology. That’s something that is, it’s not an option. It is something that we must do. And we have unique advantages. And our unique advantage is actually our diversity. We have so many languages. We have 22 official languages. And in fact, you know, the way people speak in our country changes every 50 kilometers. And that diversity must be captured if we have to understand the voice of the people. And therefore, if we build AI from India, it must acknowledge that diversity and do this. The other thing, of course, is we are a huge economy.

There is demand. and if AI is there to help the citizen to do everything, this is we can be one of the largest consumers of AI in the world. And that demand is there and then we have to build. We know that we are a cost -conscious country, right? Everything needs to be at the lowest cost. So we need to build efficient AI that actually can be delivered at scale for the people so that the last person in the country can actually have a better experience, right? Today, if you look at UPI, one of the great success stories of the past decade, and if you look, it is for the first time we feel that in India, things can be better than everywhere else in the world.

But AI done the right way can make sure that every service to citizens actually is the best and the cheapest and actually done in the best possible way for the country and that’s the promise of AI and that’s why we said we need to do this in India. and I think it’s not about I mean we have companies which have globally when you look at AI companies they are massive companies but in the end we have to show a new model where we can actually build AI which helps everyone and then we can win for the people and our model can be adopted in the world and that is my belief of where we need to go on this thing So Sarvam has been building India’s full stack sovereign AI platform and we work with developers fundamentally India is a country of developers we have more developers and we work with enterprises and we work with governments and we have a full stack platform which I’ll talk about In fact the full stack platform consists of three different things one is models we need models that are built in India And that is the key thing and that’s what we’ll focus on, sovereignty and models.

And then we are focusing on applications. Applications is AI for everyday tasks, for making things better for people. And finally, we’ll talk about infrastructure and infrastructure at India scale. The first thing I want to talk about are sovereign models. Rule number one is they are built from scratch. They are not dependent on any other model that is there in the world. They are built from scratch. There is no data dependency on anyone else. But at the same time, the focus is these are world -class, state -of -the -art models that we have. So I’m going to talk a little bit about some of our models. The first model is actually the SARAS model, which is actually the speech model which helps recognize speech in Indian languages.

This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. This is native. In fact, if you see, this is actually best in class in Indian languages compared to any other global model in terms of that’s something to say that in the, these are extremely small models, but these are models which have been trained with significant amounts of Indian diverse data, which will actually lead to better performance on Indie voices.

So let me just take a play, an iconic moment in India, diarized using our model. Oh, sorry. Okay, I don’t think I can make this happen. Okay. Is audio, it’s not playing. let’s maybe we’ll come back to this and here we have a majestic liftoff of lbm3 m4 rocket carrying india’s prestigious chandrayaan 3 spacecraft this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket this is the rocket of lbm3 m4 rocket We want to create models which are naturally expressive Indian voices, and they have low latency streaming and actually production grade quality.

So, in fact, our models, our speech to text models are considered in blind tests are actually the most preferred voices in Indian languages. And this is something compared to all the global competitors, such as Levin Labs and Cartesia, etc. And we have actually the most preferred text to speech model in the country. We’ll also talk, we also have a dubbing capability, which actually preserves the speaker modality and has precise control over duration and supports mixed language things. So I will show, in fact, in this model as well. We see compared to any other model in the world, we are the most preferred as far as dubbing is concerned. I will show a small snippet of what happens here.

We have the remaining 13 bits in which the 12th bit is called the small a bit. Then the remaining 6 bits are compute bits. These 3 are the destination bits and these are the jump bits. Then we have the remaining 13 bits in which the 12th bit is called the small a bit. Then the remaining 6 bits are compute bits. These 3 are the destination bits and these are the jump bits. Then we have the remaining 13 bits in which the 12th bit is called the small a bit. Then the remaining 6 bits are compute bits. These 3 are the destination bits and these are the jump bits. So we’ve also built these vision models. And these vision models are actually very good for document digitization.

They’re very good at language layout understanding, visual grounding, and in fact, finding intelligence by visual components. And finally, reading order predictions. In fact, the vision model that we built is only a 3 billion parameter state space model which beats all other models in the world, not just in Indian languages, but in English as well. So therefore, it shows, and many of these models are many orders of magnitude bigger than our models, and still we are able to get world -class performance from them. Of course, in Indian languages, we are far and away ahead of the models that are there from the global comparison. Now we come talk about some of the LLMs. We have actually, India has started the training of LLMs from scratch.

And this was done through a grant from the India AI Mission that actually without which it would not have been possible for us to train these kinds of models through a GPU grant. In fact, it is a context, it’s a model which is an extremely small model which can run on a single GPU. And it has a 32K context length and is trained on 16 trillion tokens. And it’s extremely efficient thinking which actually gives better answers with lower. And the focus of this model is actually real time conversational applications that people will be able to, it will be able to generate conversations in all the Indian languages in a real time system. And some of the benchmarks show compared to models, global models of similar size.

such as QN30B or GPT -OSS. It is far superior in the ability in terms of various parameters, such as fluency, language and script, and usefulness and conciseness. So the important thing is that this is able to, at the size, it is again a global best. And then we’ll come finally to our largest model, which is the 105 billion parameter model. This is the largest LLM that has trained from scratch in the country. And it is basically better on par with most open source and closed source models of its class. And it can handle various kinds of complex reasoning tasks, as well as web searching and things like that. So it’s actually a fairly complex, fairly advanced model, which again works in all…

all Indian languages. From a benchmark perspective, these models, again, compared to things like GBT OSS 120 billion, as well as Gemini Flash, is actually superior in terms of the kinds of outputs it can generate. So therefore, this is really an example. While this is quite a small model, it is really, just to give you an idea, last year we had DeepSeek R1, which actually was launched, and that was 670 billion parameters. The numbers that we are getting are actually superior to what DeepSeek R1 had last year. Of course, the state of the art has also improved. But the goal is we can show that India can build these things. And I want this. The most important thing that I want people to understand is…

just because, and I think that the, you know, I would love that not just us, but many other people come and show that we can actually build world -class models from India, because that is the fact. And these models was built with a team of just 15 young people. And really it’s the game of the youth of India that have actually made this model what it is. I’m just the spokesperson. These young kids have actually made something like this happen. And if these kids can do it, we have so much talent in the country. And I think that’s, I’m very positive about given the right kind of support in the way that we have been given, that much bigger things can happen.

Moving beyond our models, we actually want these models to become useful. And so therefore we build applications. And I’ll talk very briefly. about the kinds of applications that we build. So in fact, we have an ability to actually converse, and this is our real -time voice conversation that happens. We do more than a million minutes of voice conversation in 11 Indian languages every day using Servum. So these models are actually being used to build things. So these models which have been trained fully in our control are actually now being used for conversations across enterprises and government use cases. In fact, in the last month, we actually took about 20 NGOs, actually did a crore minutes of calling within a month to really understand what is the real -time voice conversation and what people actually are saying at the ground.

Okay. So we actually have ways by which we can make this available for work tasks and enterprise tasks. And we also have the ability to do this for content, which is digitizing books, translating books, dubbing videos. These are all studio products that we have. And I think finally, I want to end with infrastructure. We actually are doing many interesting things. We are making our models smaller to work on the edge, to work on phones. We are making, we’re actually, many of you may have heard that we have also launched some of these glasses so that we are able to have these models run on different form factors and capture the intelligence, capture the voice of India at every point in there.

And finally, for these things to work, you need to have compute at a large scale and the ability to actually very efficiently deliver all these models to India at India scale at India cost. Finally, we are, of course,

Related ResourcesKnowledge base sources related to the discussion topics (19)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“Home‑grown AI is essential to prevent India from becoming a “digital colony.””

The knowledge base includes a statement that India must develop its own core technology or risk becoming a digital colony, echoing Raghavan’s point [S14].

Confirmedhigh

“India has 22 official languages.”

A source explicitly notes that India has 22 official languages [S56].

Confirmedhigh

“UPI demonstrates that technology can scale rapidly and affordably for a massive Indian market.”

The knowledge base cites UPI processing over 20 billion transactions monthly and showing scalable, inclusive technology [S57].

Additional Contextmedium

“India’s massive, cost‑conscious market makes sovereign AI both feasible and necessary.”

Another source highlights India’s status as the world’s strongest growth market where AI’s deflationary nature aligns with development needs, providing economic context for the claim [S53].

Additional Contextmedium

“Building sovereign AI is a national mandate to preserve cultural and technological independence.”

A discussion of India’s need for its own foundation models frames the issue as cultural preservation and national capability, adding nuance to the sovereignty argument [S8].

Additional Contextlow

“Sarvam’s platform relies entirely on indigenous models without external data.”

The knowledge base mentions the broader Indian push for heterogeneous compute and sovereign AI capabilities, underscoring the strategic emphasis on indigenous model development [S19].

Additional Contextlow

“India’s linguistic diversity creates a need for AI that can handle many languages and dialects.”

A source notes India’s multilingual environment (22 official languages) and the importance of multilingual collaboration for AI, providing additional background [S22].

External Sources (63)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — -Announcer: Role/Title: Event announcer; Area of expertise: Not mentioned Rather than viewing India’s complexity as a c…
S5
Host Country Open Stage — Francis D Silva: Please welcome to the stage, from Brnoisund Register Centre, Francis de Silva. Good morning. We are the…
S6
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — Deep science requires a lot of research and development. It requires patient capital. But the societal and economic retu…
S7
(Day 2) General Debate – General Assembly, 79th session: morning session — Emmanuel Macron – France: President of the General Assembly, Heads of State and Government, Ministers, Ambassadors, La…
S8
From Innovation to Impact_ Bringing AI to the Public — The discussion concludes with predictions about the pace of transformation. Sharma suggests that the changes will be dra…
S9
Indias Roadmap to an AGI-Enabled Future — This reframes India’s perceived disadvantages (diversity, complexity) as unique competitive advantages in the AI era. It…
S10
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — And one of the changes that has happened, obviously India becoming the larger in terms of GDP size, consumer demand, peo…
S11
https://dig.watch/event/india-ai-impact-summit-2026/waves-of-infrastructure-open-systems-open-source-open-cloud — So I do expect that to start happening. That’s why we started working with CDAC and VVDN to some extent. We do see the o…
S12
https://dig.watch/event/india-ai-impact-summit-2026/driving-social-good-with-ai_-evaluation-and-open-source-at-scale — Can I, so I just wanted to add something to what you were saying. This is, you know, some of the organizations that we’v…
S13
DigiSov: Regulation, Protectionism, and Fragmentation | IGF 2023 WS #345 — Andrea Beccalli:So Daniele, yes, so indeed, the model that, as I said, underpins the internet, the protocol layer, is a …
S14
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-vivek-raghavan-sarvam-ai — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S15
Keynote-Vishal Sikka — “So if you are counting, that is about more than 250 times improvement in productivity.”[1]. “Recently, he rebuilt that …
S16
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S17
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S18
Designing Indias Digital Future AI at the Core 6G at the Edge — This comment connects technical sovereignty to cultural and ethical sovereignty, highlighting that AI systems trained on…
S19
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S20
The Global Power Shift India’s Rise in AI &amp; Semiconductors — -Building India’s AI and Semiconductor Ecosystem: The panel discussed India’s positioning in the global AI and semicondu…
S21
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S22
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Hello, good afternoon. Good afternoon. Good afternoon. My name is Sunil Gupta. I am co -founder and CEO of IOTA. So we r…
S23
IGF 2024 Opening Ceremony — This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. I…
S24
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — There is unexpected consensus among speakers from different backgrounds (academia, industry startup, and large corporati…
S25
Survival Tech Harnessing AI to Manage Global Climate Extremes — Thank you. See, we have data in place. We have policies in place. We have science in place. Now, what? Money in place. S…
S26
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S27
Democratizing AI: Open foundations and shared resources for global impact — ## Practical Applications and Real-World Impact Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina…
S28
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Industry adoption requires domain-specific adaptation, feedback loops, and scalable edge deployment infrastructure for r…
S29
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S30
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — This comment establishes the foundational premise for the entire presentation, shifting the conversation from ‘why build…
S31
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “What raw material is needed for AI?”[9]. “sovereign AI comes to India, we’ll have the control”[56]. “Indian government …
S32
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S33
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S34
Designing Indias Digital Future AI at the Core 6G at the Edge — see this token economy in which we are going to go in the next 5 to 7 years so sovereignty is going to be a token sovere…
S35
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S36
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Absolutely. I think we are trying to do that in a collaborative way with all of our contributors. Please be a collaborat…
S37
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between applicat…
S38
AI Innovation in India — India’s unique strength lies in its people’s ability to work in unstructured environments and get jobs done regardless o…
S39
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S40
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Prime Minister, for having us. As my colleagues have said, India will no doubt be a powerhouse in AI in many …
S41
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — I think whatever is there, first, energy. Our brain is very useful. It only runs on 20 watts. But, the GPU doesn’t run o…
S42
Skilling and Education in AI — A technology company representative highlighted the critical importance of building comprehensive AI infrastructure with…
S43
We are the AI Generation — Doreen Bogdan Martin: Thank you. Good morning and welcome to Geneva for the AI for Good Global Summit 2025. I want to th…
S44
The potential of AI and recent breakthroughs in technology — I am excited about how my friends at Microsoft and their partners have been working together to use advanced generative …
S45
Artificial Intelligence &amp; Emerging Tech — One viewpoint acknowledges the transformative potential of AI and its ability to generate novel content and integrate di…
S46
Breakthroughs in human-centric bioscience with AI — During the 2020-2021 COVID-19 pandemic, AI models dramatically sped up vaccine development, screening immune system targ…
S47
From KW to GW Scaling the Infrastructure of the Global AI Economy — The success of this transformation will depend on continued collaboration between global technology providers and local …
S48
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Industry adoption requires domain-specific adaptation, feedback loops, and scalable edge deployment infrastructure for r…
S49
Democratizing AI: Open foundations and shared resources for global impact — Focus on Real-World Impact and Practical Applications
S50
Waves of infrastructure Open Systems Open Source Open Cloud — Bharat from Divium addressed a critical deployment challenge: 90% of generative AI pilots never reach production, not du…
S51
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S52
Opening remarks — The keynote culminated in an invocation of collective duty and a rallying cry for all attendees to commit to this common…
S53
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S54
India needs a quantum leap in defence AI, says LatentAI founder — Jags Kandasamy, founder of US-based defence tech company LatentAI, isworking with Indian firms to pursue defence contrac…
S55
Open Forum #37 Digital and AI Regulation in La Francophonie an Inspiration and Global Good Practice — It is a space where there are several languages, more than 1,000 languages. For example, take the country of RDC, they h…
S56
Setting the Rules_ Global AI Standards for Growth and Governance — So… As a recent computer science student, I’m interested in building AI for India. Specifically with such a distinguis…
S57
Keynote-Rishad Premji — “UPI today processes over 20 billion transactions every month and has transformed how individuals and businesses partici…
S58
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — The summit’s emphasis on trust as the foundation for scale provides a framework for understanding why some AI applicatio…
S59
Open Forum #50 Digital Innovation and Transformation in the UN System — Ensuring solutions are scalable and cost-effective
S60
Open Forum #66 the Ecosystem for Digital Cooperation in Development — African Child Project’s work in local talent development and their success with school connectivity as a grassroots init…
S61
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Smriti Parsheera:Thanks so much, Luca, and hello to everyone in the room and online. So as Luca mentioned, I’m gonna rea…
S62
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Sovereignty has multiple layers: data, operations, technology stack – can control three out of four
S63
https://dig.watch/event/india-ai-impact-summit-2026/regulating-open-data_-principles-challenges-and-opportunities — and international cooperation that respects national regulatory frameworks. Together, these signals suggest that the eme…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
1 argument133 words per minute73 words32 seconds
Argument 1
Sovereignty necessity – (Speaker 1)
EXPLANATION
The speaker emphasizes that having AI capabilities that are owned and controlled by India is essential, not a luxury. Sovereign AI is presented as a strategic requirement for the country’s future.
EVIDENCE
The speaker states that in a world dominated by English-language models, Sarvam’s work shows that sovereign AI capability is “not just a luxury, it is a necessity” [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for sovereign AI is reinforced by remarks that sovereignty is not isolation and that India must develop core AI capabilities to avoid dependence, as discussed in [S6] and echoed in the keynote emphasizing the risk of becoming a digital colony [S4].
MAJOR DISCUSSION POINT
Need for sovereign AI capability
AGREED WITH
Vivek Raghavan
V
Vivek Raghavan
12 arguments139 words per minute2407 words1033 seconds
Argument 1
Build indigenous AI to avoid digital colonisation – (Vivek Raghavan)
EXPLANATION
Vivek argues that India must develop its own AI systems to prevent dependence on foreign technologies, which would turn the country into a digital colony. Indigenous development is framed as a non‑optional national mandate.
EVIDENCE
He warns that without building AI domestically, India will become “a digital colony which is dependent on other countries for this core, core technology” and stresses that this is “not an option” but something the country “must do” [25-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote repeatedly warns that without indigenous AI India will become a digital colony dependent on foreign technology, matching the argument’s claim [S4] and [S14].
MAJOR DISCUSSION POINT
Preventing digital colonisation
AGREED WITH
Speaker 1
Argument 2
Linguistic diversity as a strategic asset – (Vivek Raghavan)
EXPLANATION
Vivek highlights India’s vast linguistic landscape—22 official languages and regional variations every 50 km—as a unique advantage for AI development. He asserts that AI built in India must capture this diversity to truly represent the population.
EVIDENCE
He notes India’s 22 official languages, the rapid change in spoken language across short distances, and the need for AI to acknowledge this diversity to understand the voice of the people [29-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A roadmap paper frames India’s cultural and linguistic diversity as a strategic advantage for AI development, directly supporting the argument [S9].
MAJOR DISCUSSION POINT
Leveraging linguistic diversity
Argument 3
Massive, cost‑conscious market drives demand – (Vivek Raghavan)
EXPLANATION
Vivek points out that India’s large, price‑sensitive market creates strong demand for affordable AI solutions at scale. He links this demand to the country’s economic size and cost‑conscious consumer behavior.
EVIDENCE
He describes India as a huge economy with demand for AI, emphasizing the need for low-cost, efficient AI that can reach the “last person” in the country, and cites the UPI success story as evidence of India’s ability to deliver better and cheaper services [35-40][41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of India’s large, price-sensitive economy and growing consumer demand for affordable digital services, exemplified by the UPI success story, provide contextual backing [S10] and [S11].
MAJOR DISCUSSION POINT
Market size and cost sensitivity as drivers
Argument 4
Three‑layer stack: models, applications, infrastructure – (Vivek Raghavan)
EXPLANATION
Vivek outlines Sarvam’s full‑stack sovereign AI platform, which is organized into three layers: indigenous models, AI‑powered applications, and scalable infrastructure. This structure is presented as the roadmap for building sovereign AI.
EVIDENCE
He explicitly lists the three components-models, applications, and infrastructure-as the pillars of Sarvam’s platform [43-46].
MAJOR DISCUSSION POINT
Full‑stack AI architecture
Argument 5
SARAS speech model delivers best‑in‑class Indian language performance – (Vivek Raghavan)
EXPLANATION
Vivek claims that the SARAS speech‑to‑text model, trained on extensive Indian data, outperforms global competitors in Indian languages. He stresses its native design and superior user preference in blind tests.
EVIDENCE
He describes SARAS as a native speech model trained on diverse Indian data, calling it best-in-class for Indian languages, and notes that blind-test results show it is the most preferred voice compared to global rivals such as Levin Labs and Cartesia [53-65][69-70].
MAJOR DISCUSSION POINT
Superior Indian‑language speech model
Argument 6
Vision model (3 B parameters) outperforms global rivals in document digitisation – (Vivek Raghavan)
EXPLANATION
Vivek presents a 3‑billion‑parameter vision model that excels at document digitisation, language layout understanding, and visual grounding, outperforming larger international models in both Indian and English contexts.
EVIDENCE
He explains that the vision model, despite its modest size, beats all other world models in document digitisation, language layout understanding, visual grounding, and reading order prediction, and that it outperforms models even with many more parameters [84-89].
MAJOR DISCUSSION POINT
High‑performance, lightweight vision model
Argument 7
Small LLM with 32K context, 16 T tokens, real‑time multilingual chat – (Vivek Raghavan)
EXPLANATION
Vivek describes a compact large language model with a 32 K token context window, trained on 16 trillion tokens, designed for real‑time conversational AI across all Indian languages. The model is positioned as efficient yet capable.
EVIDENCE
He notes that the model can run on a single GPU, has a 32 K context length, was trained on 16 trillion tokens, and is optimized for real-time multilingual chat applications [92-98].
MAJOR DISCUSSION POINT
Efficient multilingual conversational LLM
Argument 8
105 B parameter LLM matches or exceeds open‑source giants on benchmarks – (Vivek Raghavan)
EXPLANATION
Vivek asserts that Sarvam’s 105‑billion‑parameter LLM, the largest trained in India from scratch, performs on par with or better than leading open‑source and proprietary models on a range of benchmarks, including reasoning and web‑search tasks.
EVIDENCE
He states that the 105 B model is comparable to top open-source and closed-source models, superior to GBT-OSS 120 B and Gemini Flash in output quality, and handles complex reasoning and web-search tasks [101-107].
MAJOR DISCUSSION POINT
World‑class large‑scale Indian LLM
Argument 9
15‑person youth team built world‑class models, proving talent depth – (Vivek Raghavan)
EXPLANATION
Vivek highlights that a small team of 15 young engineers built the described models, demonstrating the depth of technical talent available in India and the potential for larger achievements with proper support.
EVIDENCE
He mentions that the models were built by a team of just 15 young people, emphasizing the youth’s contribution and expressing confidence that greater support would enable even bigger successes [114-119].
MAJOR DISCUSSION POINT
Youth talent and feasibility
Argument 10
Voice‑conversation platform handling >1 M minutes daily in 11 languages – (Vivek Raghavan)
EXPLANATION
Vivek reports that Sarvam’s real‑time voice conversation platform processes over one million minutes of speech each day across eleven Indian languages, illustrating large‑scale deployment and multilingual reach.
EVIDENCE
He states that more than a million minutes of voice conversation in 11 Indian languages are handled daily using Sarvam’s platform, and that these models are already used in enterprise and government contexts [123-126].
MAJOR DISCUSSION POINT
High‑volume multilingual voice service
Argument 11
Applications for NGOs, enterprises, government, content digitisation and dubbing – (Vivek Raghavan)
EXPLANATION
Vivek outlines a suite of applications built on the models, including NGO outreach, enterprise workflows, government services, book digitisation, translation, and video dubbing, showing practical societal impact.
EVIDENCE
He describes collaborations with NGOs that generated crore minutes of calls, as well as capabilities for digitising books, translating, and dubbing videos, indicating a broad application portfolio [127-131].
MAJOR DISCUSSION POINT
Diverse real‑world AI applications
Argument 12
Edge‑optimized models for phones and AR glasses; large‑scale compute at Indian cost – (Vivek Raghavan)
EXPLANATION
Vivek explains efforts to shrink models for edge deployment on smartphones and AR glasses, and to provide large‑scale compute resources at costs affordable for India, ensuring nationwide accessibility.
EVIDENCE
He mentions making models smaller for edge devices, launching glasses that run AI locally, and building large-scale compute infrastructure that delivers models at Indian scale and Indian cost [133-135].
MAJOR DISCUSSION POINT
Edge deployment and cost‑effective compute
Agreements
Agreement Points
Sovereign AI is essential for India’s future and must be built domestically
Speakers: Speaker 1, Vivek Raghavan
Sovereignty necessity – (Speaker 1) Build indigenous AI to avoid digital colonisation – (Vivek Raghavan)
Both speakers stress that AI capability owned and controlled by India is not optional but a strategic necessity; without it India risks becoming a digital colony and being left behind in the AI race [2][12-14][25-27].
POLICY CONTEXT (KNOWLEDGE BASE)
The consensus reflects India’s push for digital sovereignty, stressing the need to develop foundational AI models locally rather than rely on external providers, as discussed in the context of heterogeneous compute and sovereign capabilities [S29], and reinforced by keynote arguments that India must build AI to remain sovereign [S30][S31].
Similar Viewpoints
Both see sovereign AI capability as a non‑luxury, mandatory national mandate to safeguard India’s technological independence and development trajectory [2][12-14][25-27].
Speakers: Speaker 1, Vivek Raghavan
Sovereignty necessity – (Speaker 1) Build indigenous AI to avoid digital colonisation – (Vivek Raghavan)
Unexpected Consensus
Overall Assessment

The discussion shows clear alignment between the introductory speaker and the keynote on the need for indigenous, sovereign AI for India, framing it as a strategic imperative to avoid digital dependence. Beyond this core point, there is little overlap on other themes such as linguistic diversity, market size, or specific technical achievements.

Moderate consensus limited to the sovereignty argument; this shared stance reinforces policy momentum for building a national AI ecosystem but indicates divergent focus on implementation details and broader AI applications.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows strong alignment between the introductory remarks and Vivek Raghavan’s keynote. Both emphasize the strategic need for sovereign, indigenous AI to avoid dependence on external technology and to serve India’s linguistic diversity and large market. No substantive contradictions or opposing viewpoints are evident.

Minimal – the speakers are largely in consensus, indicating a unified stance on the importance of building indigenous AI capacity. This coherence suggests that policy discussions around AI sovereignty in this context are likely to progress without major internal contention.

Partial Agreements
Both speakers stress that India must develop its own AI capabilities rather than rely on foreign models. Speaker 1 calls sovereign AI a "necessity" in a world dominated by English‑language models [2], while Vivek warns that without domestic AI India will become "a digital colony" and says building AI is "not an option" but a mandate [25-27].
Speakers: Speaker 1, Vivek Raghavan
Sovereignty necessity – (Speaker 1) Build indigenous AI to avoid digital colonisation – (Vivek Raghavan)
Takeaways
Key takeaways
AI sovereignty is essential for India to avoid digital colonisation and ensure long‑term strategic advantage. India’s linguistic diversity and large, cost‑conscious market provide unique advantages for building indigenous AI solutions. Sarvam is developing a full‑stack sovereign AI platform comprising three layers: indigenous models, applications, and scalable infrastructure. Indigenous models include the SARAS speech model (best‑in‑class for Indian languages), a 3 B‑parameter vision model that outperforms global rivals in document digitisation, a small LLM with 32K context and 16 trillion tokens for real‑time multilingual chat, and a 105 B‑parameter LLM that matches or exceeds leading open‑source and closed‑source models. A small, 15‑person youth team was able to build world‑class models, demonstrating the depth of talent in India. Sarvam’s applications already power over 1 million minutes of voice conversation daily in 11 Indian languages and support NGOs, enterprises, government, content digitisation, and dubbing. Infrastructure efforts focus on edge‑optimized models for phones and AR glasses and on delivering large‑scale compute at Indian cost.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
India can train state‑of‑the‑art models, bring AI to a billion Indians, and do it all. In the long run, it’s sovereignty that matters; unless we build these things ourselves we will be left behind.
Frames AI development as a matter of national sovereignty rather than just a technical race, shifting the conversation from competition over model size to strategic self‑reliance.
Sets the thematic foundation for the entire talk, prompting the audience to view subsequent technical details through the lens of national independence and leading to deeper discussion of why indigenous AI is essential.
Speaker: Vivek Raghavan
Our unique advantage is actually our diversity – 22 official languages and dialects that change every 50 km. AI built in India must capture that diversity to truly understand the voice of the people.
Highlights linguistic diversity as a strategic asset, introducing a novel angle on why Indian AI can outperform global models on local tasks.
Leads to the introduction of the SARAS speech model and the emphasis on native language performance, steering the conversation toward concrete examples of leveraging diversity.
Speaker: Vivek Raghavan
We are a cost‑conscious country; everything needs to be at the lowest cost. So we need to build efficient AI that can be delivered at scale for the last person in the country.
Connects economic realities with technical design, urging a focus on efficiency and affordability rather than raw scale.
Shifts the tone from showcasing large models to discussing model size, optimization, and edge deployment, paving the way for later remarks about small‑model performance and edge devices.
Speaker: Vivek Raghavan
Rule number one for sovereign models: they are built from scratch, with no data dependency on anyone else, yet they are world‑class, state‑of‑the‑art models.
Establishes a clear principle that underpins the company’s technical strategy, challenging the common practice of fine‑tuning existing global models.
Creates a turning point that moves the discussion from high‑level motivation to the concrete methodology of model development, prompting listeners to consider the feasibility of truly independent AI pipelines.
Speaker: Vivek Raghavan
Our 105 billion‑parameter LLM, trained entirely in India by a team of just 15 young engineers, matches or exceeds the performance of comparable open‑source and closed‑source models.
Demonstrates that scale and excellence can be achieved with limited resources, reinforcing the earlier sovereignty narrative and inspiring confidence in domestic talent.
Elevates the conversation to a proof‑of‑concept milestone, encouraging the audience to envision larger future projects and reinforcing the message that talent, not just capital, drives success.
Speaker: Vivek Raghavan
We are already delivering more than a million minutes of real‑time voice conversation in 11 Indian languages every day, and have helped NGOs generate a crore minutes of calls in a month.
Shows tangible, large‑scale impact of the technology on society, moving the discussion from theory to real‑world application.
Broadens the scope of the talk to include social impact and public‑sector use cases, prompting listeners to think about deployment challenges and benefits beyond commercial profit.
Speaker: Vivek Raghavan
We are making our models smaller to run on the edge, on phones, even on glasses, so that AI can be present at every point in India at India‑scale and India‑cost.
Introduces the vision of ubiquitous, low‑cost AI access, linking back to the earlier cost‑consciousness point and expanding the discussion to hardware and infrastructure.
Serves as a forward‑looking conclusion that ties together sovereignty, diversity, efficiency, and accessibility, setting the stage for future collaborations and policy discussions.
Speaker: Vivek Raghavan
Overall Assessment

The discussion was driven by a series of strategically placed insights from Vivek Raghavan that moved the audience from a broad, ideological stance on AI sovereignty to concrete demonstrations of technical capability, social impact, and future deployment. Each pivotal comment reframed the conversation—first by positioning sovereignty as a national imperative, then by leveraging India’s linguistic diversity, emphasizing cost‑effective design, insisting on building models from scratch, showcasing a world‑class large model built by a tiny team, evidencing real‑world usage, and finally envisioning edge‑centric, ubiquitous AI. These moments collectively shifted the tone from abstract advocacy to demonstrable achievement and forward‑looking ambition, deepening the audience’s understanding of how India can achieve independent, inclusive AI at scale.

Follow-up Questions
How can India develop sovereign AI models from scratch without relying on external data or pretrained models?
Raghavan emphasizes the need for completely home‑grown models to ensure AI sovereignty, highlighting a gap that requires research into data collection, model architecture, and training pipelines.
Speaker: Vivek Raghavan
What methods can be used to capture and model India’s linguistic diversity, including 22 official languages and regional dialects that change every 50 km?
He stresses that AI must reflect India’s language variety, indicating a need for research on multilingual data acquisition, dialect representation, and evaluation metrics.
Speaker: Vivek Raghavan
How can AI models be made cost‑effective and efficient enough to serve the ‘last person’ in India at scale?
Raghavan points out India’s cost‑conscious market and the requirement for low‑cost deployment, suggesting research into model compression, quantization, and affordable inference infrastructure.
Speaker: Vivek Raghavan
What strategies are needed to deploy AI models on edge devices such as smartphones and specialized hardware like glasses?
He mentions work on making models run on phones and glasses, indicating further investigation into edge‑optimised architectures, on‑device training, and power‑efficient inference.
Speaker: Vivek Raghavan
How can large‑scale compute infrastructure be built in India to support training and serving of sovereign AI models at national cost levels?
Raghavan notes the necessity of massive, affordable compute for India‑scale AI, highlighting a research and policy area around data centre design, hardware sourcing, and financing models.
Speaker: Vivek Raghavan
What benchmarking frameworks should be used to evaluate Indian AI models against global competitors across speech, vision, and language tasks?
He references blind tests and comparisons with global models, implying a need for standardized, transparent benchmarking tailored to Indian languages and use‑cases.
Speaker: Vivek Raghavan
How can AI applications be effectively integrated into government services and large public platforms like UPI and Aadhaar?
Raghavan links AI to existing digital infrastructure, suggesting research on secure, scalable integration, privacy preservation, and impact assessment.
Speaker: Vivek Raghavan
What sustainable funding and policy mechanisms are required to support long‑term AI sovereignty initiatives beyond initial grants?
He mentions the grant from the India AI Mission that enabled model training, indicating a need to explore ongoing financing, regulatory frameworks, and public‑private partnerships.
Speaker: Vivek Raghavan
How can the talent pipeline be expanded and supported so that small teams (e.g., 15 young engineers) can scale up to larger, more complex AI projects?
Raghavan credits a small youth team for their achievements, pointing to research on education, mentorship, and ecosystem development to nurture AI expertise.
Speaker: Vivek Raghavan
What are the best practices for building AI‑driven applications (e.g., real‑time voice conversation, document digitization, dubbing) that serve NGOs, enterprises, and citizens effectively?
He describes various applications in use, indicating a need for further study on productisation, user experience, scalability, and impact measurement.
Speaker: Vivek Raghavan

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI

HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI

Session at a glanceSummary, keypoints, and speakers overview

Summary

This panel discussion focused on heterogeneous computing and AI infrastructure challenges in India, featuring experts from Qualcomm, Cisco, IIT Madras, and Intel, along with a government minister. The central theme revolved around distributing AI compute across different layers – from edge devices to data centers – to create more efficient and resilient AI systems.


Durga Malladi from Qualcomm emphasized the importance of running AI inference directly on devices, noting that smartphones can now handle 10 billion parameter models while smart glasses can run sub-1 billion parameter models. He advocated for “hybrid AI” that seamlessly distributes computing between devices, edge cloud, and data centers based on connectivity and requirements. The discussion highlighted voice interfaces in native languages as a key application area, with support for 14 languages mentioned.


Arun Shetty from Cisco identified three major impediments to AI adoption: infrastructure constraints (power, compute, and networking), security and safety concerns, and data gaps. He stressed that enterprises and governments possess the best datasets but need secure, fit-for-purpose solutions. The security aspect was particularly emphasized, noting challenges like model hallucination, toxicity injection, and the need for comprehensive visibility across AI systems.


Professor Kamakoti discussed the critical importance of trust in AI systems, explaining that mathematical definitions of trust are complex and context-dependent. He emphasized the need for sovereign AI models and robust cybersecurity measures, particularly for critical infrastructure and public systems. Energy efficiency emerged as a crucial concern, with discussions about power usage effectiveness (PUE) and the need for hybrid energy solutions. The panelists concluded that India’s AI future depends on collaborative efforts to address infrastructure, security, and energy challenges while leveraging the country’s strengths in application development and diverse datasets.


Keypoints

Major Discussion Points:

Heterogeneous Computing and Distributed AI Infrastructure: The panel extensively discussed the need for distributed computing across devices, edge cloud, and data centers rather than concentrating all compute in single locations. This includes running inference on smartphones (up to 10 billion parameter models) and smart glasses to reduce dependency on network connectivity and data centers.


Infrastructure Constraints and Resource Management: Significant focus on three critical bottlenecks – power consumption (with projections of 63 gigawatts needed), compute availability, and networking challenges. The discussion emphasized energy efficiency, with data centers requiring 40% power for cooling, 40% for computing, and 20% for connectivity, highlighting the need for better power usage efficiency (PUE).


Security and Safety in AI Systems: Comprehensive discussion on AI security challenges including model vulnerabilities, adversarial AI, data poisoning, and the need for “shadow AI” detection in enterprises. The panel distinguished between safety issues (models not working as intended) and security threats (external actors changing model behavior).


Data Quality and Sovereign AI Models: Emphasis on the importance of high-quality, accessible datasets for AI development, with particular focus on India’s need for sovereign large language models using local data rather than relying solely on public datasets used by global models.


Practical Applications and India’s AI Ecosystem: Discussion of India’s growing AI landscape with 300+ Gen AI startups, focus on application layer development, and the need for localized solutions including voice interfaces in 14 Indian languages and domain-specific models for various verticals.


Overall Purpose:

The discussion aimed to explore India’s path toward building robust, secure, and efficient AI infrastructure through heterogeneous computing approaches, addressing both technical challenges and policy considerations for scaling AI adoption across enterprises and public systems.


Overall Tone:

The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual respect and built upon each other’s points constructively. The tone was forward-looking and solution-oriented, with participants sharing practical insights from their respective domains while acknowledging shared challenges. The minister’s closing remarks reinforced the positive, collaborative atmosphere by emphasizing the partnership between policymakers and technologists for societal welfare.


Speakers

Speakers from the provided list:


Kazim Rizvi – Moderator/Host of the panel discussion


Prof. V. Kamakoti – Professor and Director of a premium educational institution in India, involved in India’s AI policies, expertise in cybersecurity and trust in AI systems


Arun Shetty – Representative from Cisco, expertise in networking, connectivity, AI infrastructure, and AI safety/security


Gokul Subramaniam – Expertise in edge computing, AI deployment models, vertical-specific AI applications, and infrastructure optimization


Durga Malladi – Representative from Qualcomm, expertise in processors, heterogeneous computing, AI inference on devices, and hybrid AI solutions


Sridhar Babu – Honorable Minister, policymaker focused on providing infrastructure support (power, electricity, water, land) for AI development


Additional speakers:


Sarah – Representative from Intel (mentioned only briefly at the end for gift presentation)


Full session reportComprehensive analysis and detailed insights

This panel discussion on heterogeneous computing and AI infrastructure in India brought together leading experts from industry, academia, and government to address critical challenges and opportunities in the country’s AI development. Moderated by Kazim Rizvi, the panel featured Durga Malladi from Qualcomm, Arun Shetty from Cisco, Professor V. Kamakoti from IIT Madras, Gokul Subramaniam from Intel, and Minister Sridhar Babu, creating a convergence of technical expertise and policy perspectives.


The Shift Towards Distributed AI Infrastructure

Durga Malladi from Qualcomm opened with a compelling vision for distributed computing that challenges conventional AI infrastructure thinking. His central principle—that AI user experience should remain consistent regardless of network connectivity—established the framework for reimagining AI deployment. This necessitates running inference directly on devices rather than relying solely on centralized cloud processing.


Malladi demonstrated the feasibility of this approach with impressive technical achievements: modern smartphones can handle up to 10 billion parameter multimodal models, while smart glasses can efficiently run sub-1 billion parameter models with 24-hour battery life. These capabilities represent a significant leap in edge computing power, enabling sophisticated AI applications to function independently of network connectivity.


The concept of “hybrid AI” emerged as Qualcomm’s strategic approach, distributing computing across devices, edge cloud infrastructure, and traditional data centers based on specific workload requirements. This optimization across the computing continuum moves away from forcing all AI processing through centralized bottlenecks.


Voice interfaces exemplified this distributed approach’s practical applications. Malladi emphasized voice as “the most natural user interface,” particularly important for native language interaction. Supporting 14 languages requires heterogeneous processors capable of handling diverse linguistic and cultural contexts, benefiting from localized processing that understands specific user environments.


Infrastructure Constraints and Energy Challenges

Arun Shetty from Cisco identified three critical impediments to AI adoption in India: infrastructure constraints encompassing power, compute, and networking; security and safety concerns; and significant data gaps. The power challenge emerged as particularly acute, with projections that AI infrastructure will require substantial energy scaling in coming years.


Gokul Subramaniam from Intel highlighted three physical constraints India cannot circumvent: land, water, and power. His analysis revealed that in data centers, 40% of energy goes to cooling, 40% to computing, and 20% to connectivity. This breakdown emphasizes the importance of achieving optimal Power Usage Efficiency ratios, where maximum energy goes to actual computing rather than supporting infrastructure.


The cooling challenge becomes complex as compute requirements scale, with different cooling solutions needed for varying power densities. For India, with its diverse climate conditions, this requires region-specific solutions accounting for local environmental factors.


Subramaniam emphasized the leapfrogging opportunity this presents for India, noting that edge computing can reach areas without traditional connectivity infrastructure, potentially democratizing access to AI capabilities across the country’s diverse geographic and economic landscape.


Security and Safety: Understanding the Distinction

Arun Shetty made a crucial distinction between safety and security concerns in AI systems. Safety issues involve models not working as intended—including hallucination, toxicity, and unpredictable behavior. Security concerns involve external actors deliberately changing model behavior through adversarial attacks or data poisoning.


This distinction has profound implications for risk mitigation strategies. Safety requires internal controls and model validation, while security demands external threat detection and defensive mechanisms. The non-deterministic nature of AI models complicates both challenges, as consistent input-output relationships cannot be guaranteed.


Professor Kamakoti provided a mathematical framework for understanding trust in AI systems, referencing the TV show “Yes Prime Minister” to illustrate that trust is neither reflexive, symmetric, nor transitive. Trust is context-dependent and temporal, varying based on circumstances and changing over time. This complexity necessitates new approaches to AI security that account for trust’s nuanced, contextual nature.


Shetty briefly mentioned the challenge of “shadow AI” in enterprises, where organizations lack visibility into AI applications their employees use, creating potential security vulnerabilities and compliance risks.


Data Sovereignty and Quality

The discussion revealed significant opportunities for India to leverage its unique datasets while addressing quality and accessibility challenges. Shetty observed that while most global AI models train on publicly available data, enterprises and governments possess superior datasets that could enable more effective AI applications.


Kazim Rizvi noted that India has approximately 300 GenAI startups building on large language models while simultaneously developing sovereign models. This dual strategy leverages global AI advances while building indigenous capabilities, balancing innovation speed with strategic autonomy.


Professor Kamakoti suggested incorporating “need to know” principles into AI models, similar to security clearance systems, enabling appropriate responses based on user authorization levels while maintaining functionality for authorized users.


Practical Applications and Strategic Opportunities

Gokul Subramaniam highlighted specific AI applications in education, including real-time translation and transcription services that could transform learning experiences. These domain-specific models optimized for educational content could provide personalized learning and adaptive content delivery, functioning effectively even in areas with limited connectivity.


The education sector represents a particularly promising area for distributed AI deployment, potentially democratizing access to high-quality educational resources across India’s diverse geographic regions.


Small and medium businesses also represent significant opportunities for edge AI deployment, making advanced AI capabilities accessible to organizations that previously couldn’t afford sophisticated cloud-based solutions.


Policy Support and Collaborative Framework

Minister Sridhar Babu’s participation highlighted critical policy support for India’s AI infrastructure development. His commitment to providing adequate power, electricity, water, and land infrastructure represents essential government backing for private sector AI initiatives.


The minister emphasized “welfare for all, happiness for all” as the ultimate goal of AI implementation, providing important ethical grounding that ensures AI development serves broader social goals rather than purely technical or commercial objectives.


Future Outlook

The panelists outlined a vision for India’s AI future that balances ambitious technical goals with practical implementation challenges. The hybrid AI approach represents a pragmatic path forward, enabling incremental deployment of AI capabilities across the computing continuum without requiring massive upfront investments in centralized infrastructure.


The development of sovereign AI models represents both a technical challenge and strategic opportunity, requiring sustained investment in data infrastructure, model development capabilities, and human capital to compete globally while serving specifically Indian needs.


Energy efficiency improvements offer significant opportunities for reducing environmental impact while controlling operational costs. The combination of edge computing capabilities with strategic data center deployment could optimize India’s AI infrastructure development within existing resource constraints.


Conclusion

This panel discussion illuminated the complex challenges facing India’s AI infrastructure development while highlighting significant opportunities for innovation and leadership. The shift towards heterogeneous, distributed computing represents a fundamental reimagining of AI deployment that could serve diverse user needs while respecting infrastructure constraints and security requirements.


India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy environment—positions the country to lead in this new paradigm. The collaborative spirit evident in this discussion, where technical experts, policymakers, and industry leaders work toward common goals, provides a compelling framework for navigating the complex challenges ahead while maximizing AI’s transformative potential for all citizens.


The vision articulated by the panelists of AI systems that serve all citizens, respect sovereignty and security requirements, and operate efficiently within India’s constraints offers a roadmap for the country’s AI future that balances innovation with practical implementation realities.


Session transcriptComplete transcript of the session
Durga Malladi

with them. 14 languages. Voice is the most natural user interface to devices around you. So the idea is not to actually keep typing and texting, but it’s about the usage of voice, but in native languages, which actually work very nicely. And that means that you have to make sure that the use cases are built on top of it. So that’s what our focus is from a processor standpoint. One final note, and given that I have maybe just one minute, another aspect of heterogeneous computers, disaggregation of compute within the network itself. What I mean by that is, at some point in time, you might have extremely good connectivity to the network. And at some other point in time, you might have zero connectivity to the network.

And the question to ask is, do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? Or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices. Not that you want to do it all the time, but when you can, why not? today we can run up to a 10 billion parameter model multimodal model state of the art on a smartphone and a sub 1 billion parameter model in your glasses without necessarily charging a device the whole day it’s once every 24 hours so we’ve come a long way in that which means use the data centers use the edge cloud as and when necessary they have a role to play at the same time make sure that we also build for devices where the inference actually occurs and users directly perceive that’s where the data originates so it’s important to think about it that way

Kazim Rizvi

yeah there’s there’s also very strong environmental aspect to this and which often gets unnoticed and undiscussed but that element is also very important in terms of efficiently managing the energy requirements because energy as we also know is finite and so I think you one thing which I was struck to me which is spoke what was inferences and the other is that it’s not just about the energy but it’s also about the energy and the A lot of what’s happening in India is also around inferencing models, right? So, I mean, in terms of the Gen AI story, which we have, we have almost 300 Gen AI startups, which are building on top of the large language models.

And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with Sarvam and others, we are also building sovereign large language models, right? So, we are sort of, as Minister Vaishnav has spoken about, every, you know, piece of the puzzles. We are there in terms of fitting that puzzle together. I’d like to come to Mr. Arun Shetty, sir, is with Cisco. And, you know, we just want to take it further from where Durga sir had left in terms of talking about enterprise adoption at scale. And, you know, of course, with Cisco, what are the challenge of bottlenecks, which you see in terms of computer availability, connectivity, which Cisco is trying to do, which you see in generally.

And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about.

And I think that’s a really important thing to talk about. And I think that’s a really important thing to talk about. And

Arun Shetty

Yeah, so as you know, we connect and protect the… This should be working, right? Yeah, yeah, yeah. As you know, we connect and protect even in the AI era, right? We started in the internet, we came into the cloud, and we are in this era. First of all, thank you very much for having me, and it’s indeed a pleasure to be representing this esteemed panel. So I think what I’ll do is I’ll summarize based on what others have spoken, actually, and I think those are real problems. The first one is clearly the three impediments for AI adoption is one is clearly infrastructure constraints, and we all spoke about it, and they all spoke about it.

The first one is the power. power is a challenge will be a challenge i think usc is expecting it will be 63 gigawatts of power in couple of years what they require okay and then the compute is a problem we did recognize that compute is becoming a problem and then uh kamakoti sir did tell that cisco is in networking what are you doing in networking and networking will be a problem actually and then we need to see how we need to address and clearly it has to be a fit for purpose solutions because you not only do huge data centers and i think what we see is in couple of years you will see there is more inferencing happening at the edge and that’s what we need that’s what the how the world will move and that’s why solutions have to be fit for purpose for sure the second bigger challenge what we have is the security and the safety aspect so that is something what we need to pay lot of attention because as the adage says what if you can’t see you can’t trust right you can’t trust something what you can’t see so you need to have the visibility across the stack and also you need to see whether the models what we are using are the right models for us or is there anything malicious into the models itself actually vulnerabilities in that model so the security aspect becomes where security and safety aspect becomes very very important because the models hallucinate you can inject toxicity into the model so those are the challenges what we need to address as far as what we use so i think it is very very important to build our models and if you look at the models all the models were built using the public data which was the text voice and video data so but however the enterprises the government has the best data sets so why can’t we use those data sets so the third impediment what we have today is the data set so the third impediment what we have today is the data set so the third impediment is the data gap and data gap is essentially i need to have high quality accessible and manageable data and we can build gpts using that what we can call it as a machine gpt what we can build using that use that for inferencing use that for training use that for inferencing and we get a lot of quality use of ai without data the which is the fuel for the ai today you can’t really move forward on the ai and i think these are the typical three problems and the ways we are looking at addressing this is clearly one is i will not be able to build a huge data center for a specific use case so take a use case and then see how fast i can give that infrastructure a comprehensive secure ai factory or a secure infrastructure whether it is in the data center or in the edge actually so that people can focus on building the use cases or the applications on top of it and the second thing comes on the safety and the security aspect of it and how we can do the defense mechanism and the third one is the data so these are the three problems what cisco is trying to address along with the ecosystem partners of course because this is not a problem what you can solve alone actually yeah thank you

Kazim Rizvi

yeah i think i don’t know if my mic okay it’s okay yeah and i’ll i’ll sort of take from the security point which you have spoken and i’ll come back to dr kamakoti i think we have on the clock it shows seven but on my watch it shows 15 yeah so i’ll go by my watch uh yeah so dr kamakoti would like to focus on critical infra and public systems here and as you know that as with the advent of ai we’re going to use it across these sectors as well so how important do you see heterogeneous compute in terms of contributing to national resilience to safeguard and to sort of you know ensure that our critical infrastructure public systems are secure as well

Prof. V. Kamakoti

So today, the type of things that we need to do for each one of these actions, the type of inferencing, type of response time we need, as Shetty mentioned, it’s going to be different. I hope all of you have seen Yes Prime Minister, and always they say, need to know, right? You need to know, right? And now what happens is if I am going to make a model that has understood the entire data, then this that the model, and it is used to be someone that someone should they need to know that data? That’s a very important question. So that’s where the entire aspect of cybersecurity comes in. And that’s why we are all saying that we have need to have sovereign models.

As he rightly pointed out, we can have adversarial AI, we can go poison the whole thing and then make it teach make it tell the things that, you know, should not be told, or need not be told. Okay. This is something that we need to very much look at from a security point where i do an inferencing and my training data set goes for a toss number one so we need to have something for for education at least as a director of one of the premium students in the country what my worry is that for education like how we have since our board for uh you know movies what we should make models for which certain details alone should be fed into it see is a bacha right whatever you teach what it will tell you back probably do a little more uh generative on that so this is number one number two is again coming back to cisco itself right you do deep packet inspection and basically you do it with some signatures today the the whole story is changing dynamically the malware can change its signature so that’s going to be the biggest challenge now and what sort of inferencing they are going to do they have to bring some more different architecture and that will be a heterogeneous architecture now and so so So, ultimately, you know, as you see, you know, what you see, the trust component, I always repeat this, I’ll finish with this with my one minute.

So, trust is, you know, friends, you know, if you want to define A is equivalent to B, that’s the definition, right? If you want to define A, you have to come with B, which is equivalent to A. So, equivalence in discrete mathematics, equivalence relation should satisfy three properties, reflexive, symmetric, transitive. A is trust is not reflexive, I don’t trust myself sometimes. Trust is not symmetric, I trust Sarah, Sarah may not trust me. Trust is not transitive, I trust Gokul, Gokul trust you, I may not trust you. Trust is in addition, trust is context dependent, I trust. I trust you on something, I don’t trust you on something else. It is temporal, morning I trust you, evening I don’t trust you.

So, right? So, the main thing is, we have to build that mathematics. defined trusted and if you go to you know some of these search engine and define trust you get 1 million hits for that so so that is going to be the most important part so specifically on heterogeneous we will have certain different types of security issues something which a can sound something which is originating because of a and that’s where all of us edge connectivity server all the three people have to work together and and we will teach and he’ll put policy so

Kazim Rizvi

but both of you are equally playing an important role in terms of policy dr. Kamothi you’re also you know very influential and important figure in India’s AI policies of course lots to learn from you Goku very quickly would like to come to you and you know just sort of taking away in terms of the practical deployment models and what are the sort of examples you’ve seen which demonstrate that we are moving towards heterogeneous compute right and what needs to be done to also get get to that

Gokul Subramaniam

So I started off with workload and I’ll go back to the same thing. So one of the things that we’re looking at and it’s critical is to see what vertical really needs what kind of domain specific models. And then try to apply that as much as possible as edge inferencing and contain the walls that are there that prevents AI to work efficiently. Primarily it’s like memory, you know, the connectivity, the IO, the thermal and then the power. So from an edge inferencing standpoint, there are quite a few things that are being done, be it an education segment where you want more translation, data being available, transcription. So that the knowledge is being imparted in a way that you have with the right data with the lowest power that’s meaningful for the student.

And more importantly, when we talk security, it’s not only about protecting data. the models we keep talking data and models it’s protecting the user that’s even more fundamental and how you can ensure that that happens second thing is applying it to other verticals be it small and medium business i think there is a great opportunity there where edge inferencing and putting compute with the right kind of power that can translate the businesses into actually using ai more effectively the last aspect that i want to also touch upon is in terms of just power you know as we go from one gig to nine to ten gig in the next five years in the country we have to realize that india is challenged by three physical things that we cannot run away from land water and power and these are very important aspects that it will drive how we set up our infrastructure and you know almost you know in a hundred percent of your power energy that comes into a data center forty percent goes into cooling forty percent into your computer and twenty percent on connectivity and there is this famous metric that you use, the PUE, the power usage efficiency.

It has to be as close to one as possible. All the power that you give goes to the most important thing, which is the computer, not to the cooling and things. And there are a lot of technologies that are being played with with respect to how much you can air cool on a rack, per rack, and that was okay up to about 25 kilowatt, and as you start to get to 100, you have to use liquid cooling, and then how we can set that infrastructure up. And for a country like India, it’s absolutely important to look at what hybrid energy solutions we can go with, because just pure renewable may not be able to address it. You’ll have to have something that is stable and be able to do something off -grid so that there is that dependency for you to get the data from the data centers and push as much as possible to edge, because edge is all about reach.

How can I take it to places across the country where there is no access to connectivity? It’s about how can I leapfrog? How can I leapfrog with verticals that have not used technology as much? We’ve always done a leapfrogging in India, and this is a great moment for us, and total cost of ownership. Those are the big areas.

Kazim Rizvi

Thank you, Gokul. And I think as we are approaching the end of the panel, I’d sort of like to go to Durga and Dr. Shetty also in terms of closing remarks and the way forward. So to both of you, I’ll pose this question in terms of the next two to four years, because I think the AI age, we don’t think too far ahead. We can’t do five -year planning or 10 -year planning. I think two -year planning is sufficient. So what enterprise outcomes are you both looking at? Maybe we can start with Durga in terms of defining India’s access to compute, access to infrastructure, capacity, and also sort of building in scale, cost efficiency and energy efficiency.

Durga Malladi

So I’ll keep it brief. I think what I’m looking forward to with all the conversations here and in other parts of the world as well, where the problems are somewhat similar, is the ability to distribute compute across the entire network. So think of a combination of inference that runs in devices to the largest… extent that’s possible. Edge cloud, on -prem servers, where a lot of the localized processing can be done. And these can be done in air -cooled carts, by the way. The point that was made earlier is absolutely relevant. You don’t necessarily need liquid cooling all the time. You can do air -cooled carts and then just use air -cooled servers and running up to 100 to 300 billion parameter models, which are getting pretty sophisticated.

That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the overall requirements of what you need in a data center. And instead of, therefore, concentrating the entire compute in one single location and then building it for just that alone, a holistic approach of devices, edge cloud, plus data center is probably what we are looking forward to. From Qualcomm, we call it as hybrid AI. It’s not just a marketing slogan, but it is something that we truly believe in. Thank you.

Arun Shetty

Since the infrastructure part has been addressed here, so let me talk. A little bit more on safety and security aspects. So I think one of the things what we need to understand about the modern… these models are very intricate and very complex. And it’s also non -deterministic because if you give an input, not necessarily the output will be the same like a standard application, correct? So that’s why it is non -deterministic. So what one should be doing, right? There are two aspects of safety and security. I’ll just touch upon why it is important to know that actually. Safety is all about, we want the models to work in a certain way but it is not working in that certain way or the way we want them to work.

That is the first part of it. That’s where the toxicity part, hallucination, all those challenges come actually. The second part of it is the security part wherein a bad actor from outside can change the behavior of the model. So we need to be careful about both the things actually. So what one should be doing? Say for example, I think Kamakoti sir also told about users to have, that’s it. users also to be secure, right? So it is essential that the organizations or the country has to build that actually. So which means if I’m accessing a chat GPT and sending some confidential info, the system should stop me. So that is the when I’m accessing a third party application, the system should be smart enough to stop me saying that you can’t be sharing that information that’s not allowed for you to share that.

So that’s something which is already happening in organizations today. The second part of it is the first party application, I’m building an application, and I’m using a model. So now the organization should be able to scan what all my AI assets are. Because one of the biggest challenges for enterprise is the shadow AI applications, they don’t know what people are doing actually. So I need to clearly know what all my assets are. That is number one, I detect all my assets or discover all my assets. And next is I should scan. and also ensure that these models and the applications what I’m using are not vulnerable. If it is vulnerable, then I need to put guardrails around it or I need to fix those problems.

And similarly, there are organizations who are already telling that there are a lot of risks. So you need to nist Mitre and OWASP are telling that there are a lot of risks associated with that and we need to ensure that we need to stop that. So that is something what Cisco is focus, our focus to see how we can use AI to defend the, to defend against all these malice and also the vulnerabilities what we see. Thank you so much.

Kazim Rizvi

I think with this, we’ll probably close the panel, but I’d like to invite Honorable Minister once again for his very quick closing remarks that you have sort of. Thank you. us highly motivated to sort of build on this. You’ve heard us in the last one hour. What are your thoughts? We’d love to hear from you in terms of your closing address.

Sridhar Babu

Thank you, Rizvi. And in fact, it’s a great pleasure to be here with the eminent Padmasree Awadi, Professor Kamakoti and Gokul and Durga Prasad and Mr. Vichetti sharing their truly professional experience and how as a policymaker, how we should view the things especially in terms of power, electricity, water and the land. How we should be well equipped to provide all these things where all the eminent panelists over here or the eminent people of the days would be thinking of putting. My primary challenge they have posed before is try to provide all these things. We are here to provide the rest remaining. And in fact, you know, thanks once again for a very apt introduction. very apt dialogue over here.

Ultimately, we have to all, me as a policymaker, and you all technocrats and innovators have to think the basic agenda for this AI impact term is welfare for all, happiness for all. Thank you for inviting me. Thank you so much.

Kazim Rizvi

With this, we will have to close the panel. I’d like to thank all our panelists and also invite colleagues, Sarah from Intel to hand over the gifts. But we’ll just have a group photo. Thank you.

Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Durga Malladi
4 arguments197 words per minute538 words163 seconds
Argument 1
Voice interfaces in native languages require heterogeneous processors to handle diverse use cases
EXPLANATION
Durga Malladi argues that voice is the most natural user interface to devices, but it must work in native languages rather than just typing and texting. This requires heterogeneous computing processors that can handle the complexity of multiple languages and build appropriate use cases on top of this foundation.
EVIDENCE
Mentioned support for 14 languages and emphasized that use cases must be built on top of voice interfaces in native languages
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
Argument 2
AI user experience should be invariant to network connectivity quality, requiring on-device inference capabilities
EXPLANATION
Malladi contends that users should have consistent AI experiences regardless of whether they have excellent network connectivity or zero connectivity. This necessitates the ability to run AI inference directly on devices, not as the primary method but as a backup when network conditions are poor.
EVIDENCE
Explained the scenario of varying network connectivity quality and the need for consistent user experience
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
Argument 3
Modern smartphones can run 10 billion parameter multimodal models, glasses can run sub-1 billion parameter models
EXPLANATION
Malladi demonstrates the current capabilities of edge devices in running sophisticated AI models. He shows that significant AI processing can now happen directly on consumer devices without requiring constant charging, representing major progress in on-device AI capabilities.
EVIDENCE
Specific technical specifications: 10 billion parameter multimodal models on smartphones, sub-1 billion parameter models in glasses, with 24-hour battery life
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
Argument 4
Hybrid AI approach combining devices, edge cloud, and data centers is the optimal solution
EXPLANATION
Malladi advocates for a distributed computing approach that leverages devices, edge cloud, and data centers as needed rather than concentrating all compute in one location. This hybrid approach, which Qualcomm calls ‘hybrid AI,’ mitigates overall data center requirements and provides more flexible, efficient AI deployment.
EVIDENCE
Mentioned Qualcomm’s ‘hybrid AI’ concept and explained how this approach reduces data center concentration requirements
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
AGREED WITH
Arun Shetty, Gokul Subramaniam
DISAGREED WITH
Arun Shetty
A
Arun Shetty
8 arguments179 words per minute1219 words407 seconds
Argument 1
Power consumption will reach 63 gigawatts in coming years, presenting major infrastructure challenges
EXPLANATION
Shetty identifies power as one of the three major impediments to AI adoption, citing projections that power requirements will reach 63 gigawatts in the coming years. This represents a significant infrastructure challenge that must be addressed for widespread AI deployment.
EVIDENCE
Cited USC expectations of 63 gigawatts power requirement in a couple of years
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
AGREED WITH
Gokul Subramaniam, Kazim Rizvi, Sridhar Babu
DISAGREED WITH
Gokul Subramaniam
Argument 2
Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
EXPLANATION
Shetty argues that the future of AI will see more inferencing happening at the edge rather than in massive centralized facilities. This shift requires developing specific solutions tailored to particular use cases rather than building enormous data centers for every application.
EVIDENCE
Mentioned that in a couple of years, more inferencing will happen at the edge and emphasized the need for fit-for-purpose solutions
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
AGREED WITH
Durga Malladi, Gokul Subramaniam
DISAGREED WITH
Durga Malladi
Argument 3
Security and safety are major challenges as AI models are non-deterministic and can hallucinate or be injected with toxicity
EXPLANATION
Shetty emphasizes that AI models are intricate, complex, and non-deterministic, meaning they don’t always produce the same output for the same input. This creates both safety issues (models not working as intended, hallucinations, toxicity) and security vulnerabilities where bad actors can change model behavior.
EVIDENCE
Explained the non-deterministic nature of AI models and distinguished between safety issues (hallucination, toxicity) and security threats from external bad actors
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
AGREED WITH
Prof. V. Kamakoti, Gokul Subramaniam
DISAGREED WITH
Gokul Subramaniam
Argument 4
Visibility across the entire stack is essential for trust, and models themselves can contain vulnerabilities
EXPLANATION
Shetty argues that organizations need complete visibility across their AI technology stack to establish trust. He emphasizes that AI models themselves can contain malicious elements or vulnerabilities, making it crucial to verify the integrity of the models being used.
EVIDENCE
Referenced the adage ‘you can’t trust something you can’t see’ and mentioned the need to check for malicious elements in models
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
Argument 5
High-quality, accessible, and manageable datasets are essential for effective AI implementation
EXPLANATION
Shetty identifies the data gap as the third major impediment to AI adoption, emphasizing that organizations need high-quality, accessible, and manageable data to build effective AI systems. Without proper data as the fuel for AI, organizations cannot move forward effectively with AI implementation.
EVIDENCE
Described data as ‘the fuel for AI’ and emphasized the need for high-quality, accessible, and manageable datasets
MAJOR DISCUSSION POINT
Data Quality and Sovereign AI Models
AGREED WITH
Prof. V. Kamakoti, Kazim Rizvi
Argument 6
Enterprises and governments have the best datasets that should be utilized instead of relying only on public data
EXPLANATION
Shetty points out that while most AI models are built using public text, voice, and video data, enterprises and governments possess superior datasets that should be leveraged. He suggests building specialized GPTs using these high-quality private datasets for training and inference.
EVIDENCE
Noted that current models use public data while enterprises and governments have the best datasets, suggesting building ‘machine GPTs’ with private data
MAJOR DISCUSSION POINT
Data Quality and Sovereign AI Models
Argument 7
Organizations need protection from both internal misuse (sharing confidential info with third-party AI) and external threats
EXPLANATION
Shetty describes a two-pronged security approach: preventing employees from sharing confidential information with third-party AI services like ChatGPT, and securing first-party AI applications that organizations build themselves. Systems should be smart enough to stop users from sharing inappropriate information.
EVIDENCE
Gave specific example of systems stopping users from sharing confidential information with ChatGPT
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
Argument 8
Shadow AI applications pose risks as enterprises don’t know what AI tools employees are using
EXPLANATION
Shetty highlights shadow AI as a major challenge where organizations are unaware of what AI applications their employees are using. He emphasizes the need for organizations to discover, scan, and secure all their AI assets, including identifying vulnerabilities and implementing appropriate guardrails.
EVIDENCE
Mentioned that organizations need to discover AI assets, scan for vulnerabilities, and referenced NIST, Mitre, and OWASP guidance on AI risks
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
G
Gokul Subramaniam
6 arguments186 words per minute572 words183 seconds
Argument 1
Domain-specific models should be applied at edge for different verticals like education and small-medium businesses
EXPLANATION
Subramaniam advocates for applying domain-specific AI models at the edge for various industry verticals, particularly highlighting opportunities in education (for translation and transcription) and small-medium businesses. This approach enables more efficient AI deployment while containing the technical constraints that prevent AI from working efficiently.
EVIDENCE
Mentioned specific applications in education for translation and transcription, and opportunities for small-medium businesses to use AI more effectively
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
AGREED WITH
Durga Malladi, Arun Shetty
Argument 2
India faces physical constraints of land, water, and power that will drive infrastructure setup decisions
EXPLANATION
Subramaniam identifies three fundamental physical limitations that India cannot avoid: land, water, and power. These constraints are critical factors that will determine how AI infrastructure is established and deployed across the country, requiring careful consideration in planning and implementation.
EVIDENCE
Specifically mentioned land, water, and power as three physical constraints that India cannot run away from
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
AGREED WITH
Arun Shetty, Kazim Rizvi, Sridhar Babu
Argument 3
Data centers require 40% power for cooling, 40% for compute, 20% for connectivity – optimal PUE ratio needed
EXPLANATION
Subramaniam breaks down data center power consumption, showing that cooling and computing each consume 40% of total power, with connectivity taking 20%. He emphasizes the importance of achieving a Power Usage Efficiency (PUE) ratio as close to 1.0 as possible, meaning maximum power goes to computing rather than cooling and other overhead.
EVIDENCE
Provided specific breakdown: 40% cooling, 40% compute, 20% connectivity, and explained PUE metric
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
Argument 4
Air-cooled racks work up to 25 kilowatts, liquid cooling needed beyond 100 kilowatts
EXPLANATION
Subramaniam explains the technical limitations of different cooling approaches for data centers. Air cooling is sufficient for racks up to about 25 kilowatts, but as power density increases to 100 kilowatts and beyond, liquid cooling systems become necessary to manage the heat generated.
EVIDENCE
Provided specific technical thresholds: 25 kilowatt limit for air cooling, 100 kilowatt requirement for liquid cooling
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
Argument 5
Hybrid energy solutions and off-grid capabilities are essential for India’s infrastructure needs
EXPLANATION
Subramaniam argues that India cannot rely solely on renewable energy for its AI infrastructure needs and must develop hybrid energy solutions that provide stability. Off-grid capabilities are particularly important to reduce dependency on centralized power systems and enable AI deployment in areas with limited connectivity.
EVIDENCE
Mentioned that pure renewable energy may not be sufficient and emphasized the need for stable, off-grid solutions
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
DISAGREED WITH
Arun Shetty
Argument 6
Protecting users is more fundamental than just protecting data and models
EXPLANATION
Subramaniam emphasizes that while discussions often focus on protecting data and AI models, the more fundamental concern should be protecting the users themselves. This represents a shift in security thinking from technical asset protection to human-centered security approaches.
EVIDENCE
Explicitly stated that protecting users is ‘even more fundamental’ than protecting data and models
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
AGREED WITH
Arun Shetty, Prof. V. Kamakoti
DISAGREED WITH
Arun Shetty
P
Prof. V. Kamakoti
5 arguments170 words per minute611 words215 seconds
Argument 1
Different types of inferencing and response times require heterogeneous architectures for cybersecurity applications
EXPLANATION
Kamakoti argues that cybersecurity applications require different types of AI inferencing with varying response time requirements, necessitating heterogeneous computing architectures. He specifically mentions how deep packet inspection, traditionally done with signatures, must evolve to handle dynamically changing malware that can alter its signatures.
EVIDENCE
Referenced Cisco’s deep packet inspection capabilities and the challenge of malware changing signatures dynamically
MAJOR DISCUSSION POINT
Heterogeneous Computing and Distributed AI Infrastructure
Argument 2
Trust is not reflexive, symmetric, or transitive, and is context-dependent and temporal
EXPLANATION
Kamakoti provides a mathematical analysis of trust, explaining that unlike mathematical equivalence relations, trust doesn’t follow standard logical properties. Trust is not reflexive (people don’t always trust themselves), not symmetric (if A trusts B, B may not trust A), not transitive (if A trusts B and B trusts C, A may not trust C), and varies by context and time.
EVIDENCE
Provided mathematical examples: ‘I don’t trust myself sometimes,’ ‘I trust Sarah, Sarah may not trust me,’ ‘I trust Gokul, Gokul trusts you, I may not trust you,’ and explained context and temporal dependencies
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
Argument 3
Adversarial AI can poison models and make them reveal information inappropriately
EXPLANATION
Kamakoti warns about adversarial AI attacks that can poison AI models and cause them to reveal information that should not be disclosed. This represents a significant security threat where malicious actors can manipulate AI systems to behave inappropriately or leak sensitive information.
EVIDENCE
Mentioned that adversarial AI can ‘poison the whole thing’ and make models ‘tell things that should not be told’
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
AGREED WITH
Arun Shetty, Gokul Subramaniam
Argument 4
Need-to-know principles should apply to AI models to prevent unauthorized access to sensitive data
EXPLANATION
Kamakoti references the ‘need to know’ principle from security protocols, questioning whether AI models that understand entire datasets should be accessible to users who don’t have clearance for all that information. This raises important questions about data access control in AI systems.
EVIDENCE
Referenced ‘Yes Prime Minister’ and the ‘need to know’ principle, questioning if someone should access a model trained on data they don’t have clearance for
MAJOR DISCUSSION POINT
Security and Safety in AI Systems
AGREED WITH
Arun Shetty, Kazim Rizvi
Argument 5
Educational AI models should be curated like movie ratings to ensure appropriate content for different audiences
EXPLANATION
Kamakoti suggests that AI models used in education should be curated and filtered similar to how movies are rated for different audiences. He emphasizes the need to control what information is fed into educational AI models to ensure age-appropriate and contextually suitable content.
EVIDENCE
Made analogy to movie rating boards and emphasized the need to control what details are fed into educational AI models
MAJOR DISCUSSION POINT
Data Quality and Sovereign AI Models
K
Kazim Rizvi
2 arguments183 words per minute839 words275 seconds
Argument 1
Energy management is crucial as energy resources are finite, with strong environmental implications
EXPLANATION
Rizvi emphasizes the environmental aspects of AI infrastructure deployment, noting that energy resources are finite and that efficient energy management is crucial. He points out that this environmental dimension often goes unnoticed and undiscussed despite its importance.
EVIDENCE
Mentioned that energy is finite and highlighted the environmental aspect that ‘often gets unnoticed and undiscussed’
MAJOR DISCUSSION POINT
Infrastructure Constraints and Energy Efficiency
AGREED WITH
Arun Shetty, Gokul Subramaniam, Sridhar Babu
Argument 2
India is building sovereign large language models while leading in AI applications with 300+ GenAI startups
EXPLANATION
Rizvi highlights India’s dual approach to AI development: leading in AI applications with over 300 generative AI startups building on top of large language models, while also developing sovereign capabilities through companies like Sarvam. This represents India’s comprehensive strategy across the AI value chain.
EVIDENCE
Cited specific number of 300+ GenAI startups and mentioned Sarvam as an example of sovereign LLM development
MAJOR DISCUSSION POINT
Data Quality and Sovereign AI Models
AGREED WITH
Arun Shetty, Prof. V. Kamakoti
S
Sridhar Babu
2 arguments141 words per minute166 words70 seconds
Argument 1
Policymakers must ensure adequate provision of power, electricity, water, and land for AI infrastructure
EXPLANATION
Minister Babu acknowledges the infrastructure challenges raised by the technical experts and commits to providing the necessary foundational resources. He recognizes that policymakers have a crucial role in ensuring adequate provision of basic infrastructure requirements that enable AI development and deployment.
EVIDENCE
Directly responded to panelists’ concerns about infrastructure constraints and committed to providing power, electricity, water, and land
MAJOR DISCUSSION POINT
Policy and National Resilience
AGREED WITH
Arun Shetty, Gokul Subramaniam, Kazim Rizvi
Argument 2
The ultimate goal of AI implementation should be welfare and happiness for all citizens
EXPLANATION
Minister Babu emphasizes that regardless of the technical complexities and infrastructure challenges, the fundamental objective of AI development should be ensuring welfare and happiness for all people. This represents a human-centered approach to AI policy and implementation.
EVIDENCE
Explicitly stated ‘welfare for all, happiness for all’ as the basic agenda for AI impact
MAJOR DISCUSSION POINT
Policy and National Resilience
Agreements
Agreement Points
Distributed AI infrastructure is superior to centralized data centers
Speakers: Durga Malladi, Arun Shetty, Gokul Subramaniam
Hybrid AI approach combining devices, edge cloud, and data centers is the optimal solution Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers Domain-specific models should be applied at edge for different verticals like education and small-medium businesses
All three technical experts agree that the future of AI lies in distributed computing architectures rather than massive centralized data centers, with edge inferencing becoming increasingly important for various use cases
Power and energy constraints are critical challenges for AI infrastructure
Speakers: Arun Shetty, Gokul Subramaniam, Kazim Rizvi, Sridhar Babu
Power consumption will reach 63 gigawatts in coming years, presenting major infrastructure challenges India faces physical constraints of land, water, and power that will drive infrastructure setup decisions Energy management is crucial as energy resources are finite, with strong environmental implications Policymakers must ensure adequate provision of power, electricity, water, and land for AI infrastructure
There is unanimous agreement that power and energy constraints represent fundamental challenges that must be addressed through policy and infrastructure planning
AI security requires comprehensive, multi-layered approaches
Speakers: Arun Shetty, Prof. V. Kamakoti, Gokul Subramaniam
Security and safety are major challenges as AI models are non-deterministic and can hallucinate or be injected with toxicity Adversarial AI can poison models and make them reveal information inappropriately Protecting users is more fundamental than just protecting data and models
All speakers agree that AI security is complex, requiring protection against multiple threat vectors including model poisoning, adversarial attacks, and user protection beyond just data security
High-quality, sovereign data and models are essential for effective AI deployment
Speakers: Arun Shetty, Prof. V. Kamakoti, Kazim Rizvi
High-quality, accessible, and manageable datasets are essential for effective AI implementation Need-to-know principles should apply to AI models to prevent unauthorized access to sensitive data India is building sovereign large language models while leading in AI applications with 300+ GenAI startups
There is consensus that data quality and sovereignty are crucial, with agreement on the need for controlled access to sensitive information and development of indigenous AI capabilities
Similar Viewpoints
Both speakers emphasize the current capabilities and future potential of edge devices for running sophisticated AI models, demonstrating technical feasibility of distributed AI
Speakers: Durga Malladi, Gokul Subramaniam
Modern smartphones can run 10 billion parameter multimodal models, glasses can run sub-1 billion parameter models Domain-specific models should be applied at edge for different verticals like education and small-medium businesses
Both speakers approach trust and security from a systems perspective, emphasizing the complexity of establishing trust in AI systems and the need for comprehensive visibility and understanding
Speakers: Arun Shetty, Prof. V. Kamakoti
Visibility across the entire stack is essential for trust, and models themselves can contain vulnerabilities Trust is not reflexive, symmetric, or transitive, and is context-dependent and temporal
Both speakers provide specific technical details about power consumption challenges in AI infrastructure, demonstrating deep understanding of energy efficiency requirements
Speakers: Gokul Subramaniam, Arun Shetty
Data centers require 40% power for cooling, 40% for compute, 20% for connectivity – optimal PUE ratio needed Power consumption will reach 63 gigawatts in coming years, presenting major infrastructure challenges
Unexpected Consensus
User protection as primary security concern
Speakers: Gokul Subramaniam, Arun Shetty, Prof. V. Kamakoti
Protecting users is more fundamental than just protecting data and models Organizations need protection from both internal misuse (sharing confidential info with third-party AI) and external threats Educational AI models should be curated like movie ratings to ensure appropriate content for different audiences
While technical discussions often focus on data and model security, there was unexpected consensus that user protection should be the primary concern, representing a human-centered approach to AI security that goes beyond traditional technical safeguards
Mathematical approach to understanding trust in AI systems
Speakers: Prof. V. Kamakoti, Arun Shetty
Trust is not reflexive, symmetric, or transitive, and is context-dependent and temporal Visibility across the entire stack is essential for trust, and models themselves can contain vulnerabilities
The convergence on a mathematical and systematic approach to defining and implementing trust in AI systems was unexpected, showing alignment between academic and industry perspectives on the fundamental complexity of trust
Hybrid energy solutions necessity
Speakers: Gokul Subramaniam, Kazim Rizvi, Sridhar Babu
Hybrid energy solutions and off-grid capabilities are essential for India’s infrastructure needs Energy management is crucial as energy resources are finite, with strong environmental implications Policymakers must ensure adequate provision of power, electricity, water, and land for AI infrastructure
The consensus that pure renewable energy is insufficient and that hybrid solutions are necessary represents an unexpected pragmatic approach to environmental sustainability in AI infrastructure
Overall Assessment

The panel demonstrated remarkable consensus across technical, policy, and business perspectives on key challenges and solutions for AI infrastructure deployment. Main areas of agreement include the superiority of distributed AI architectures, the critical nature of power and energy constraints, the complexity of AI security requirements, and the importance of data sovereignty.

High level of consensus with strong alignment between industry experts, academics, and policymakers. This suggests a mature understanding of AI infrastructure challenges and indicates potential for coordinated policy and technical responses. The agreement spans both technical implementation details and broader strategic approaches, suggesting that India’s AI development strategy has broad stakeholder support.

Differences
Different Viewpoints
Centralized vs. Distributed AI Infrastructure Approach
Speakers: Durga Malladi, Arun Shetty
Hybrid AI approach combining devices, edge cloud, and data centers is the optimal solution Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers
While both speakers agree on moving away from purely centralized approaches, Malladi advocates for a comprehensive hybrid system that still includes data centers as part of the solution, whereas Shetty emphasizes moving toward edge-focused solutions and away from building huge data centers entirely
Primary Security Focus: Technical Assets vs. Human Protection
Speakers: Arun Shetty, Gokul Subramaniam
Security and safety are major challenges as AI models are non-deterministic and can hallucinate or be injected with toxicity Protecting users is more fundamental than just protecting data and models
Shetty focuses extensively on technical security aspects like model vulnerabilities, visibility across stacks, and protecting data and models, while Subramaniam argues that protecting users themselves should be the more fundamental concern rather than just technical asset protection
Energy Infrastructure Strategy: Hybrid Solutions vs. Cooling Optimization
Speakers: Gokul Subramaniam, Arun Shetty
Hybrid energy solutions and off-grid capabilities are essential for India’s infrastructure needs Power consumption will reach 63 gigawatts in coming years, presenting major infrastructure challenges
Subramaniam emphasizes the need for hybrid energy solutions and off-grid capabilities as fundamental requirements, while Shetty focuses more on the scale of power challenges and fit-for-purpose solutions without specifically advocating for hybrid energy approaches
Unexpected Differences
Fundamental Philosophy of AI Security
Speakers: Arun Shetty, Gokul Subramaniam
Visibility across the entire stack is essential for trust, and models themselves can contain vulnerabilities Protecting users is more fundamental than just protecting data and models
This disagreement is unexpected because both speakers are addressing AI security concerns, but they have fundamentally different philosophical approaches. Shetty takes a traditional cybersecurity approach focusing on technical assets and system visibility, while Subramaniam advocates for a human-centered security philosophy. This represents a deeper divide in security thinking than might be expected in a technical infrastructure discussion
Trust as a Mathematical vs. Practical Concept
Speakers: Prof. V. Kamakoti, Arun Shetty
Trust is not reflexive, symmetric, or transitive, and is context-dependent and temporal Visibility across the entire stack is essential for trust, and models themselves can contain vulnerabilities
This is an unexpected disagreement because while both speakers discuss trust in AI systems, Kamakoti approaches it as a complex mathematical and philosophical problem that cannot be solved through traditional equivalence relations, while Shetty treats trust as a practical engineering problem that can be addressed through visibility and technical controls. This represents a fundamental divide between theoretical and applied approaches to the same concept
Overall Assessment

The speakers show moderate disagreement on implementation approaches while sharing common goals around distributed AI, security, and infrastructure efficiency

The disagreement level is moderate but significant, particularly around philosophical approaches to security and the optimal balance between centralized and distributed infrastructure. These disagreements have important implications as they reflect different priorities: technical optimization vs. human-centered design, theoretical rigor vs. practical implementation, and comprehensive hybrid solutions vs. focused edge-first approaches. The disagreements suggest that while there is consensus on the challenges facing AI infrastructure deployment, there are meaningful differences in how to address these challenges that could impact policy and implementation decisions.

Partial Agreements
All speakers agree that edge computing and distributed AI are important for the future, but they disagree on the optimal balance between edge, cloud, and data center resources. Malladi wants a hybrid approach that includes all three, Shetty emphasizes fit-for-purpose solutions over large data centers, and Subramaniam focuses on domain-specific edge applications
Speakers: Durga Malladi, Arun Shetty, Gokul Subramaniam
AI user experience should be invariant to network connectivity quality, requiring on-device inference capabilities Edge inferencing will become more prevalent, requiring fit-for-purpose solutions rather than huge centralized data centers Domain-specific models should be applied at edge for different verticals like education and small-medium businesses
Both speakers agree that AI security is a critical concern and that models can be compromised, but they approach the solution differently. Shetty focuses on organizational visibility, asset discovery, and technical safeguards, while Kamakoti emphasizes the mathematical complexity of trust and the need for access control based on security clearance principles
Speakers: Arun Shetty, Prof. V. Kamakoti
Security and safety are major challenges as AI models are non-deterministic and can hallucinate or be injected with toxicity Adversarial AI can poison models and make them reveal information inappropriately
Both speakers agree on the importance of developing sovereign AI capabilities and utilizing high-quality datasets, but they emphasize different aspects. Shetty focuses on enterprises and governments having better datasets than public sources, while Rizvi highlights India’s comprehensive approach across applications and sovereign model development
Speakers: Arun Shetty, Kazim Rizvi
High-quality, accessible, and manageable datasets are essential for effective AI implementation India is building sovereign large language models while leading in AI applications with 300+ GenAI startups
Takeaways
Key takeaways
Hybrid AI approach combining on-device inference, edge cloud, and data centers is essential for optimal AI deployment, rather than relying solely on centralized data centers India faces critical infrastructure constraints in power (projected 63 gigawatts needed), land, and water that will fundamentally shape AI infrastructure decisions Security and safety are paramount concerns as AI models are non-deterministic, vulnerable to adversarial attacks, and can hallucinate or be poisoned with malicious content Voice interfaces in native languages (14 languages mentioned) represent the most natural user interface, requiring heterogeneous processors to handle diverse use cases Energy efficiency is crucial with data centers consuming 40% power for cooling, 40% for compute, and 20% for connectivity – requiring optimization toward PUE ratio of 1 India is leading in AI applications with 300+ GenAI startups while also developing sovereign large language models for national resilience Trust in AI systems is complex – not reflexive, symmetric, or transitive, and is both context-dependent and temporal, requiring new mathematical frameworks High-quality enterprise and government datasets should be leveraged instead of relying solely on public data for training AI models Edge inferencing will become more prevalent, with modern smartphones capable of running 10 billion parameter models and smart glasses running sub-1 billion parameter models
Resolutions and action items
Organizations must implement systems to detect and prevent sharing of confidential information with third-party AI applications Enterprises need to discover and scan all AI assets to address shadow AI applications and vulnerabilities Policymakers committed to providing adequate power, electricity, water, and land infrastructure to support AI development Need to develop fit-for-purpose solutions for different verticals like education and small-medium businesses using domain-specific models Implement guardrails around vulnerable AI models and applications Develop hybrid energy solutions and off-grid capabilities for distributed AI infrastructure
Unresolved issues
How to mathematically define and implement trust frameworks for AI systems given their complex, non-reflexive, non-symmetric, and non-transitive nature Specific mechanisms for transitioning from current centralized data center models to distributed hybrid AI infrastructure Detailed strategies for managing the transition from air-cooled to liquid cooling systems as compute requirements exceed 25-100 kilowatts per rack Concrete implementation timelines and resource allocation for the projected 63 gigawatt power requirement Standardization approaches for sovereign AI models across different government and enterprise use cases Specific regulatory frameworks for curating AI models for different audiences (similar to movie rating systems mentioned for education) Technical specifications for ensuring AI user experience remains invariant to network connectivity quality
Suggested compromises
Distribute compute requirements across devices, edge cloud, and data centers rather than concentrating everything in centralized locations Use air-cooled servers for edge cloud deployments (100-300 billion parameter models) while reserving liquid cooling for larger data center operations Implement hybrid energy solutions combining renewable and stable power sources rather than relying purely on renewable energy Balance between using sovereign models for sensitive applications while leveraging global models for general use cases Apply need-to-know principles to AI models while maintaining functionality for authorized users Focus on protecting users as the primary concern while also implementing data and model protection measures
Thought Provoking Comments
The question to ask is, do you want your AI user experience to be invariant to the quality of the communications that you have at that point in time? Or do you want it to depend on it? Obviously, you want it to be invariant. That means you must have the ability to run inference directly on devices.
This comment reframes the entire AI infrastructure discussion by challenging the assumption that AI processing must be centralized. It introduces the concept of ‘invariant user experience’ regardless of connectivity, which is a sophisticated way of thinking about distributed computing that goes beyond technical specifications to user experience design.
This comment established the foundational theme for the entire discussion – the need for distributed, heterogeneous computing. It shifted the conversation from traditional centralized AI models to a more nuanced understanding of edge computing, influencing subsequent speakers to address power efficiency, security implications, and practical deployment models in this distributed context.
Speaker: Durga Malladi
Trust is not reflexive, I don’t trust myself sometimes. Trust is not symmetric, I trust Sarah, Sarah may not trust me. Trust is not transitive, I trust Gokul, Gokul trust you, I may not trust you. Trust is in addition, trust is context dependent… It is temporal, morning I trust you, evening I don’t trust you.
This mathematical deconstruction of trust is profoundly insightful because it applies formal mathematical principles (equivalence relations) to a fundamental human concept that underpins all AI security discussions. It reveals the complexity of building ‘trusted AI’ by showing that trust itself defies the logical structures we typically use in computing.
This comment elevated the security discussion from technical vulnerabilities to philosophical foundations. It provided a theoretical framework that influenced how other panelists approached AI safety, moving beyond conventional security measures to consider the fundamental nature of trust in AI systems. It also bridged the gap between technical and policy perspectives.
Speaker: Prof. V. Kamakoti
India is challenged by three physical things that we cannot run away from: land, water and power… almost in a hundred percent of your power energy that comes into a data center forty percent goes into cooling forty percent into your computer and twenty percent on connectivity
This comment grounds the entire AI infrastructure discussion in India’s specific physical and geographical constraints. The precise breakdown of power usage in data centers provides concrete data that transforms abstract discussions about ‘energy efficiency’ into actionable insights about infrastructure design.
This observation shifted the conversation from theoretical AI capabilities to practical implementation challenges specific to India. It influenced subsequent discussions about hybrid energy solutions, edge computing necessity, and the importance of air-cooled versus liquid-cooled systems, making the entire panel more focused on India-specific solutions.
Speaker: Gokul Subramaniam
Safety is all about, we want the models to work in a certain way but it is not working in that certain way… The second part of it is the security part wherein a bad actor from outside can change the behavior of the model.
This distinction between safety (internal model behavior) and security (external threats) is crucial because it clarifies two often-conflated aspects of AI risk. It provides a clear framework for understanding different types of AI vulnerabilities and their respective mitigation strategies.
This clarification helped structure the security discussion more systematically. It influenced how other panelists approached AI governance, leading to more specific discussions about shadow AI applications, model scanning, and the need for different types of guardrails for internal versus external threats.
Speaker: Arun Shetty
We have almost 300 Gen AI startups, which are building on top of the large language models. And India is definitely leading the way in terms of application layer… we are also building sovereign large language models
This comment provides crucial context about India’s position in the global AI ecosystem, distinguishing between application-layer innovation and foundational model development. It highlights India’s unique strength while acknowledging the need for sovereign capabilities.
This observation helped frame the entire discussion within India’s specific AI development trajectory. It influenced how panelists discussed infrastructure needs, security requirements, and policy implications, making the conversation more strategically focused on India’s path to AI self-reliance rather than generic AI development.
Speaker: Kazim Rizvi
Overall Assessment

These key comments fundamentally shaped the discussion by establishing three critical frameworks: (1) the technical paradigm shift from centralized to distributed AI computing, (2) the theoretical foundation for understanding trust and security in AI systems, and (3) the practical constraints and opportunities specific to India’s AI development. The conversation evolved from abstract AI concepts to concrete, India-specific implementation strategies. Durga’s opening comment about invariant user experience set the distributed computing theme that ran throughout the panel. Kamakoti’s mathematical analysis of trust provided intellectual depth that elevated security discussions beyond technical fixes. Gokul’s infrastructure constraints grounded the conversation in physical realities, while Shetty’s safety-security distinction provided operational clarity. Rizvi’s framing of India’s AI ecosystem position gave strategic context. Together, these comments created a comprehensive discussion that balanced theoretical insights with practical implementation challenges, ultimately producing a roadmap for India’s heterogeneous computing future that addresses technical, security, infrastructure, and policy dimensions simultaneously.

Follow-up Questions
How can we build mathematical frameworks to define and measure trust in AI systems, given that trust is not reflexive, symmetric, or transitive, and is context-dependent and temporal?
This is critical for establishing security and safety standards in AI systems, especially for critical infrastructure and public systems where trust mechanisms are fundamental to national resilience.
Speaker: Prof. V. Kamakoti
How can we effectively implement ‘need to know’ principles in AI models to prevent unauthorized access to sensitive data while maintaining model functionality?
This addresses the cybersecurity challenge of ensuring that AI models don’t expose sensitive information to users who shouldn’t have access to it, which is crucial for sovereign AI models.
Speaker: Prof. V. Kamakoti
What new architectures are needed for dynamic malware detection when signatures can change dynamically, moving beyond traditional deep packet inspection?
This is essential for cybersecurity as traditional signature-based detection methods become inadequate against evolving AI-powered threats.
Speaker: Prof. V. Kamakoti
How can organizations effectively discover and manage shadow AI applications that employees are using without IT knowledge?
This is a critical enterprise security challenge as unauthorized AI usage can lead to data breaches and compliance violations.
Speaker: Arun Shetty
What hybrid energy solutions can India implement to support AI infrastructure given the constraints of land, water, and power?
This is crucial for India’s AI infrastructure development, as the country faces physical constraints that will determine how data centers and edge computing can be deployed at scale.
Speaker: Gokul Subramaniam
How can we optimize the balance between air cooling and liquid cooling in data centers to achieve PUE (Power Usage Efficiency) as close to 1 as possible?
This is important for energy efficiency in AI infrastructure, as cooling represents 40% of data center power consumption and optimizing this could significantly reduce overall energy requirements.
Speaker: Gokul Subramaniam
What specific guardrails and scanning mechanisms are needed to protect against AI model vulnerabilities and ensure first-party AI applications are secure?
This addresses the need for practical security implementations as organizations build their own AI applications and need to protect against both safety issues (hallucination, toxicity) and security threats from bad actors.
Speaker: Arun Shetty
How can we develop domain-specific models for different verticals while optimizing for edge inferencing constraints like memory, connectivity, IO, thermal, and power?
This is essential for practical AI deployment across various industries, ensuring that AI solutions are tailored to specific use cases while being efficient enough to run at the edge.
Speaker: Gokul Subramaniam

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare

Session at a glanceSummary, keypoints, and speakers overview

Summary

At the India AI Summit, Cloudflare CEO Matthew Prince outlined a vision for the future of artificial intelligence, drawing parallels with the historic diffusion of the printing press ([2-5][7-15]). He argued that, like Gutenberg’s invention, AI should not be confined to a handful of firms but instead be distributed among hundreds of thousands of companies worldwide ([24-26]). Prince emphasized five guiding principles: decentralizing AI ownership, ensuring creators are fairly compensated, empowering small businesses-especially in the global South-against consolidation, preserving cultural and linguistic diversity, and making the technology affordable for the poorest users ([24-42][44-48]). He warned that the current internet revenue model based on human traffic is collapsing, citing the decline in Google-driven visits from one per two pages to one per thirty, and noting that AI providers now scrape millions of pages for each visitor they return ([73-86][87-89]). Because “human eyeball traffic” is disappearing, Prince called for a new business model that rewards creators for advancing knowledge rather than generating click-bait, proposing a system that fills the “holes” in humanity’s collective understanding ([94-103][108-112]). He cautioned that without such reforms, AI could become as centralized as past telecom and social-network monopolies, concentrating power in five dominant firms instead of the desired 500,000 ([113-118]). Positioning Cloudflare as a neutral infrastructure provider, Prince noted that the company operates in over 120 countries, handles more than 20 % of global internet traffic, and is used by over 80 % of leading AI firms despite not being an AI company itself ([56-63][64-66]). To promote the five principles, Cloudflare is deploying top AI models on its global network so they run locally, simplifying access for users without deep technical expertise ([124-128]). The firm also funds education through a large Indian startup accelerator, offers free credits to emerging AI projects, and has launched “AI for Bharat,” a multilingual model supporting 22 Indian languages ([129-138]). Security-by-design and cost-efficiency are highlighted as essential, with Cloudflare working to reduce the massive budgets traditionally required to build AI services ([141-147]). Prince challenged the audience to adopt these values, urging policymakers, businesses, and civil society to create an inclusive AI economy that is not limited to a few companies in a single location ([148-152]). He concluded by expressing optimism that, with coordinated effort, AI can enhance humanity, protect cultural uniqueness, and become universally accessible ([39-42][49-51]). The speech underscored that the AI ecosystem stands at a crossroads, requiring immediate action to avoid consolidation and to establish new, knowledge-focused compensation mechanisms ([52-53][90-96]). Ultimately, Prince’s message was that democratizing AI infrastructure and rewarding genuine knowledge creation are critical for a fair and sustainable digital future ([44-48][108-112]).


Keypoints


Major discussion points


Democratizing AI and preventing concentration in a few hands – Prince argues that AI should be “distributed, not controlled” and that the ecosystem must involve “500,000 companies… spread around the world” rather than a handful of dominant players [24-26][118-119][150-152].


Creating sustainable business models that compensate creators – He stresses that the current AI paradigm “takes but does not give back” and calls for new models that reward journalists, academics, and researchers for generating knowledge instead of merely driving traffic [27-31][90-97][101-104][112].


Ensuring inclusion of small businesses and the Global South while preserving cultural diversity – Prince highlights the risk that AI could become a “consolidator” that marginalizes small enterprises, especially in developing regions, and warns against “Americanizing” the world, urging AI to respect local languages and identities [32-38][40-42][120-122].


Cloudflare’s role as an enabler and broker for a fair AI ecosystem – He outlines how Cloudflare leverages its global network to make AI models locally available, funds education and accelerators, and builds secure-by-design, region-specific services (e.g., AI for Bharat) to lower entry barriers [56-63][124-138][141-147].


Addressing the risk of centralization and calling for coordinated policy action – The speech warns that without deliberate effort, AI could repeat historical patterns of “telecoms… social networks… hyperscalers” consolidating power, and urges governments, businesses, and civil society to act together to achieve the five outlined goals [53-55][113-118][71-73].


Overall purpose / goal of the discussion


Prince’s talk is a policy-oriented call-to-action that frames a five-point framework for the future of AI: (1) broad distribution of the technology, (2) fair compensation for content creators, (3) support for small businesses and the Global South, (4) preservation of cultural diversity, and (5) universal accessibility. He positions Cloudflare as a neutral infrastructure broker that can help realize this vision while urging all stakeholders to adopt these principles in shaping AI policy and business practice.


Tone of the discussion


The tone begins historical and reflective, using the printing press analogy to set an optimistic vision. It then shifts to cautiously critical, highlighting risks of centralization, loss of creator value, and exclusion of the Global South. The remainder of the speech adopts a constructive and hopeful tone, detailing concrete steps Cloudflare is taking and ending with an encouraging call for collective action. Overall, the delivery moves from reverent storytelling to urgent advocacy, ending on an upbeat, collaborative note.


Speakers

Speaker 1


– Role/Title: Event moderator / host (introducing the keynote) [S1]


– Area of Expertise:


Matthew Prince


– Role/Title: CEO and Co-founder, Cloudflare; former professor of history [S4][S6]


– Area of Expertise: Internet infrastructure, cloud services, AI policy, technology democratization


Additional speakers:


(none)


Full session reportComprehensive analysis and detailed insights

Matthew Prince opened his keynote by thanking the audience and the AI Summit hosts, noting his honour at speaking in India and hinting at a follow-up appearance in Geneva [2-4][155-156]. He framed his remarks as a historical reflection, recalling his former role as a history teacher and arguing that studying past technological revolutions can illuminate the path forward [5-6].


Using Gutenberg’s printing press as an analogy, Prince explained that the press originated near Mainz, Germany and spread within sixty years to dozens of European cities-including Paris, Rome, the Netherlands, Spain and London-through itinerant technicians who set up local workshops with regional investors [7-15]. Because the press was not centrally controlled, no single nation could gate-keep or suppress it [7-15]. He argued that this decentralized diffusion made the press a “once-in-a-lifetime” catalyst for societal improvement, and that today’s AI era represents a comparable turning point [18-19].


From this perspective Prince introduced a five-point framework for AI development:


1. Distribution – AI should be “distributed, not controlled,” with ownership spread across roughly 500 000 firms worldwide rather than a handful of giants [24-30].


2. Creator enablement – Business models must ensure that journalists, academics and researchers are fairly compensated for their work instead of having it merely harvested by AI systems [27-31][90-97].


3. Support for small enterprises – The ecosystem should empower small businesses and entrepreneurs, especially in the Global South, so AI does not become a consolidating force that erodes personal relationships and local commerce [32-38][120-122].


4. Cultural and linguistic diversity – AI must preserve regional identities and avoid an “Americanising” homogenisation [35-38].


5. Universal affordability – AI should be affordable and accessible to the poorest users, not locked behind expensive subscriptions [40-42]; the underlying business model must allow AI to reach the broadest set of users [148-152].


Prince warned that the traditional internet revenue model-driven by human “eyeball” traffic that fuels advertising and subscriptions-is collapsing. He cited data showing Google’s efficiency falling from one visitor per two pages scraped a decade ago to one visitor per thirty pages today, while AI providers such as OpenAI and Anthropic now scrape thousands of pages for each visitor they return [73-86][87-89]. This shift threatens the historic traffic-based monetisation that once sustained the web [81-85].


To address this, Prince proposed a new reward system that values the creation of knowledge-advancing content rather than click-bait. He likened human knowledge to a block of Swiss cheese, where the “holes” represent gaps in human knowledge; AI companies are willing to pay to fill those gaps, aligning incentives between AI firms and society [101-104][108-112]. This model would shift compensation from sheer traffic metrics toward contributions that genuinely expand collective understanding [94-103].


Prince cautioned that without deliberate action AI could repeat the centralisation patterns seen in telecommunications, social networks and hyperscalers, concentrating power in a few dominant firms-he warned it must not be “restricted to literally five companies in one postal code in San Francisco” [113-118]. He called for coordinated policy, business and civil-society efforts to prevent this consolidation and to ensure that AI remains a globally distributed resource [53-55][71-73].


Positioning Cloudflare as a neutral broker, Prince highlighted the company’s extensive infrastructure-operating in over 120 countries, handling more than 20 % of global internet traffic, and serving over 80 % of leading AI firms-while noting that Cloudflare does not develop AI models [56-63][64-66][70-73]. Leveraging this network, which spans more than 300 cities, Cloudflare is deploying leading AI models at the edge so they run locally in users’ cities, simplifying access for those without deep technical expertise [124-128]. The firm has also regionalised models to respect local laws, languages and cultures, exemplified by “AI for Bharat,” which supports 22 Indian languages and is available to students and startups [136-138].


Further, Cloudflare runs a large Indian startup accelerator, provides free credits for emerging AI projects, and organises hackathons such as the IIT “build-a-thon” to foster local talent [129-140]. The company emphasises “security-by-design,” noting that the original internet was built without security in mind and that AI systems must be built securely from the ground up [141-147]. It also argues that future AI providers should not require trillion-dollar budgets or nuclear-scale infrastructure, but instead operate on affordable, efficient systems [141-147].


Prince concluded by urging all stakeholders-governments, businesses, and civil society-to adopt the five values of distribution, creator enablement, support for small enterprises, cultural preservation and universal access. He reiterated that AI should accelerate humanity, not diminish it, and expressed confidence that, with collective effort, AI can enhance humanity rather than erode it [84-86][155-156].


Session transcriptComplete transcript of the session
Speaker 1

Ladies and gentlemen, please welcome Mr. Matthew Prince, CEO, Cloudflare.

Matthew Prince

Thank you. Thank you. It’s an honor to be here at India’s AI Summit, and I look forward to what we’ll be doing in Geneva next year. I know that here I’m supposed to be talking about the future, but forgive me for a second. I used to be a professor, sometimes teaching history. And so I think sometimes in order for us to understand the future, it’s actually good for us to understand some of the past. The past we start with and what the previous speakers were talking about was another technological marvel, which was the birth of the printing press. The printing press started as transformative technology built in Germany, just outside of Mainz. And it was, though not held there, not contained there, but spread incredibly quickly across the whole.

Of Europe, expanding not so that it was in any one place, but. to a thousand cities within less than 60 years, which at that time was remarkable. It started in Germany, but it was never just a German thing. By 1470, there were presses in Paris, Rome. By 1473, the Netherlands and Spain. By 1476, in London. German technicians who learned from Gutenberg literally walked across Europe with that knowledge and shared it across all of Europe. And they would set up a shop in a new city, find a local investor, a merchant or a bishop, and then start printing local laws, local languages, local cultures. And because the technology was not centrally controlled, no single country could gatekeep it or shut it down.

This was one of those once -in -a -lifetime moments where technology spread and the world got better as a result. And I think today that is that turning point that we are now. And so inspired by the Honorable Prime Minister’s words yesterday, I thought I would frame what I would think of. As a framework of five things that we should all be playing for. And I think we can almost all agree that these things, if AI delivers them, will be better than if it doesn’t. So the first is. Much like the printing press, this should not be a technology which is controlled by five companies. It should be 500 ,000 companies, and those companies should be spread around the world.

We need to make sure that, as the honorable prime minister said, we democratize this technology and make it available for everyone and anyone. Secondly, we need to make sure that we’re building business models around this technology. Too often today in the early times of AI, AI takes but it does not give back. We need to make sure that content creators, that journalists, that academics, that researchers are able to be compensated for the hard work that they do to create their content, rather than just having that content taken, regurgitated, and spit back through AI systems. And this is one of the key challenges that we have to think about as we go forward. We also need to make sure that what has thrived in the early Internet, small businesses, individual entrepreneurs, the global South being able to ship to the world, that that needs to be done.

That needs to be able to continue, as opposed to AI being a consolidator. And what I worry about is the fact that the small businesses that most of us do business with today, the relationship that we have with them is personal or based on mere convenience. your AI agent isn’t going to necessarily care about those things. And so we need to make sure that small businesses, and especially those in the global South, have the tools to be able to survive as the world moves to more and more agentic commerce. We also need to recognize that unique cultures and unique identities, languages, shouldn’t be homogenized by AI. There is no one universal culture, and we can’t forget those things that make each region and each part of the world unique.

AI needs to respect and actually emphasize that. We don’t want to make the mistake of just merely Americanizing the world, but instead we want to honor the culture of all of those places around the world and honor those things that have made us unique. AI shouldn’t remove our humanity, it should accelerate it and enhance it. And finally, we need to make sure that the technology is available to all, especially the poorest of those in the global south. This can’t be something where you can only get the latest, unbiased, unfiltered, highest technology if you can afford to spend thousands of dollars per month on a subscription. There needs to be a business model that allows AI to be available to the broadest set of users and make sure that we aren’t leaving people behind with this incredibly powerful technology.

That’s the framework that I would aim for. One where AI is distributed, not controlled. One where AI is actually enabling creators and research. One where AI is enabling businesses, small and large, to compete on a fair playing field. One where AI is bringing about our humanity and our differences, not homogenizing us. And one where it is available to all, not only held by the rich. I think that’s something that most of the people in this room can agree to. And I think that as we think about policy, as we… As we think about technology, we should be thinking about making sure that we are moving in that direction, moving towards all five of those goals, not moving away from them.

Unfortunately, we are not yet there. And I think we are at a crossroads and we need to all, whether in business or government or civil society, be thinking about what are the actions that we can take in order to achieve those five milestones. So how am I the person here talking about this? What in the world gives me any right to be up here speaking? Cloudflare runs one of the world’s largest networks. We have presence in over 120 countries, more than 300 cities worldwide. We see an enormous percentage of the world’s global Internet traffic. Over 20 % of the Internet sits behind us. And so we are not an AI company. We don’t have a model ourselves. But today, over 80 % of the leading AI companies use us.

So a huge percentage of the Internet uses us. A huge percent of the AI companies use us. And we sit in between those things and are working towards our mission, which is to help build a better Internet. When I say help. is really important. We don’t believe that we can do it alone. We believe that we need the work of all of the people in this room in order to contribute to that. But we do see and can act as a broker between these two sides, the content creators on one, the AI companies on the other, trying to figure out what is that future of the internet going to be? What does it look like?

How can we make sure that it continues to achieve all of those goals? And there are some real challenges. The internet that we know today was really built based on a very simple formula. And that formula was create great content that drove traffic and then monetize that traffic through either selling things, subscriptions, or ads. And if you think about it, that’s how the internet was funded over that period of time. And Google was the great patron of funding that. In fact, the way that we can measure how this has changed is to actually look at how Google’s behavior has changed. Ten years ago, we have data on this based on Cloudflare. for every two pages that Google scraped on the internet, they sent you back one human visitor.

And with that human visitor, again, you could sell them something, you could show them an ad, you could get them to subscribe to whatever you were doing. That was the business model of the internet. And that’s what caused the internet to flourish. But that business model is fading away. If you look at Google themselves, they have gotten to the point that for every 30 pages they scrape today, they only send you one. It’s gotten 15 times harder to get traffic from a Google search. Microsoft is even worse, 70 to one. But that’s the good news. If we look at the pure AI companies, OpenAI, 3 ,700 pages taken from the internet for every one visitor they send back. And in Anthropic’s case, 500 ,000, a half a million pages scraped for every one visitor you send back.

The world is going to… …look more like Anthropic over time. And that is going to put pressure on what has been the historic business model of the internet and what I worry about. is that researchers, journalists, small businesses are going to get crushed by this change unless we recognize it and try and figure out what is a new way of dealing with this. How are we able to stay in front of these changes? What is the new business model of the internet going to look like? And so when we think about this, human eyeball traffic, the current currency of the internet, is going away. It’s going to be, and it’s never going to return in the same way.

We are all getting our answers more from AI than from original sources. And so we have to figure out some new way in order to compensate creators. And that might be very pessimistic, but I actually am optimistic about that. Because you see, it turns out that what we really want to compensate people for, for a better internet, is not repeating the mistakes of the internet’s past. The internet was never built with security in mind. We should be thinking about that with AI. And it was always wrong to equate traffic with value. There are a lot of times that are things that are salacious, that generate a lot of traffic, but don’t actually further human knowledge.

And so there’s an opportunity as we think about what the new business model of the internet is to try and figure out a reward system that actually rewards creators for furthering human knowledge. And what’s amazing is this is directly aligned with what the AI companies want. If you think about it, for the first time in human history, we have something close to a mathematical model of all of human knowledge. It’s not perfect, but that’s what the sum total of the AI systems that we have are today. They are taking up that way, and they’re a way of quantifying what we know and what we don’t know. And what’s interesting is I think of it as like a giant block of Swiss cheese.

And that block has a lot of cheese in it, but it also has a lot of holes. And those holes are the places where there are holes in human knowledge. And what the AI companies want, what all of us actually want, is for those holes to be filled. And if we could create a system where creators are actually rewarded by filling in those blanks in the Swiss cheese, those holes on the Swiss cheese, by rewarding people not for creating content which is rage baiting, content which makes people angry, content which is designed just to provoke, but instead content which is designed to further human knowledge, that is something that we have a market for today and that the AI companies are excited to pay for.

What we also have to think about is how we avoid the cycle of centralization and control. And we’ve seen this with technology over and over again. Telecoms exhibited it, social networks exhibited it, the hyperscalers are exhibiting it. And there is real risk that if we don’t make it so that more and more people can create an AI company, if we end up with a world of five AI companies, not 500 ,000, that is worse for everyone around the rest of the world. And so what we’re trying to do is think about how we can create and how we can make sure that anyone, anywhere in the world has the tools and the knowledge and the ability to compete in this incredibly exciting space.

We need to stop the consolidation of AI and, again, lead to 500 ,000 companies, not just five. So what we’re fighting for at Cloudflare, as an example, and what I would ask that anyone who is playing in this space fights for, is how do we make sure that we level the playing field and that we make sure that everyone around the world can participate in what is this incredible technology? We need to make sure that AI is coming to all the parts of the world, including the global south. And I am inspired by the stories of startups and students here in India that are inventing an AI future. We need to make sure we cultivate an environment where that AI future can grow and it doesn’t get stifled by a handful of companies that are out there.

So at Cloudflare, what specifically are we doing in order to make sure that this is the case? We’re trying to figure out how can we make sure that content is available all around the world and is accessible? And widely available to everyone. That’s by taking the top models and making them available across our global network so they can be run in the city where you are actually living. Um, that, that also means that we should make it easy to use and enroll in these, in these systems. So making it so that you don’t have to have a degree in computer science to start playing with AI models and making sure that that’s, that’s the case.

What we also are doing is actually funding the education of both startups and students to, uh, to do this. So we have our own startup accelerator and in India, it is the second largest by a country participants come from here. And it’s amazing to see what all of the startups in India are creating. And we’re proud of the fact that we are giving enormous credits to be able to use our services for free for startups that are trying to build that next generation and take on some of those giants. Okay. Okay. We’re trying to make sure that this is adaptable and multimodal around the world. So we have adopted the ability to roll out models across our platform that support all of the different things that you need, wherever you are in the world.

And those models should be regionalized so that they can be trained on local laws, local languages, and local cultures. I’m proud of the fact that we have done this with AI for Bharat, which we rolled out with 22 official languages across all of India and made it available for students in India to be able to experiment and try. And it’s incredible what we’re seeing people build with these models. We also launched an IIT build -a -thon to be able to take this with AI for Bharat and Cloudflare Workers AI. And it’s incredible what the students there were able to build and deliver. We also need to have secure by design. That’s the key to what we’re doing.

We need to not make the same mistakes that we had with the internet before. And we need to make sure that it’s actionable and affordable. It can’t be that you have to have trillions of dollars of budget. You have to stand up your own nuclear power plant in order to be the next AI company. And so we’re designing systems and we’re working not just to say how much money can we throw at the problem, but how can we make these systems more efficient so that we can pass on that cost and make it more affordable for everyone. These are the work that we’re doing at Cloudflare, and I would challenge anyone in the audience, if you’re working in AI, strive for these five values.

How can we make sure that everyone has a chance to participate in the AI economy? We want to make that available for the world. We can’t say that this is going to be a technology that is restricted to literally five companies in one postal code in San Francisco that have access to it. It needs to be available to the world. We’re here to help. I appreciate all of the effort and the great hosts from the AI Summit in India, and I’m looking forward to Geneva. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (16)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Matthew Prince delivered the opening keynote at the India AI Impact Summit 2026, thanking the audience and hosts.”

The knowledge base records Matthew Prince speaking at the India AI Impact Summit 2026, confirming his presence and role as keynote speaker [S43].

Confirmedhigh

“Matthew Prince is the CEO of Cloudflare and was the featured keynote speaker at the event.”

Speaker information lists Matthew Prince as the CEO of Cloudflare and the keynote presenter [S6].

Confirmedmedium

“The Gutenberg printing press was invented near Mainz, Germany.”

The source notes that the printing press was invented in Mainz, Germany in 1440 [S47].

Confirmedmedium

“The printing press spread in a decentralized way, with no single nation able to gate‑keep or suppress it.”

Discussion of the printing press highlights its fragmented diffusion and the lack of any one government controlling it, enabling democratization of ideas [S48].

Additional Contextlow

“Prince described the spread of the printing press as a “once‑in‑a‑lifetime” moment that improved society, drawing a parallel to AI.”

The knowledge base refers to the printing press as a “once-in-a-lifetime” moment that made the world better, providing contextual support for Prince’s analogy [S52].

External Sources (58)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
https://dig.watch/event/india-ai-impact-summit-2026/open-internet-inclusive-ai-unlocking-innovation-for-all — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S5
Protecting Democracy against Bots and Plots — In summary, Cloudflare utilizes AI and machine learning to anticipate and address threats and vulnerabilities, while pro…
S6
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Matthew Prince Cloudflare — -Matthew Prince- CEO, Cloudflare (formerly a professor who taught history) -Moderator- Event moderator/host Thank you….
S7
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — “The printing press wasn’t dangerous. Not understanding it was dangerous,” Bush explained. She described an emotional cu…
S8
Defending the Cyber Frontlines / Davos 2025 — – Matthew Prince: CEO of Cloudflare Matthew Prince: Absolutely. So Cloudflare’s, our mission is to help build a bette…
S9
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Many online platforms profit from journalistic content without adequately compensating those who produce it. The analys…
S10
Open Internet Inclusive AI Unlocking Innovation for All — Very few individuals have done more to bring revolutionary and transformative technology into the hands of millions than…
S11
AI in Action: When technology serves humanity — For many small business owners, the biggest challenge is not vision but capacity. Someone running a family coffee roasti…
S12
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Audience:Hello everyone, I’m Prabhas Subedi from Nepal. It’s been so interesting in discussion, thank you so much panel….
S13
How Multilingual AI Bridges the Gap to Inclusive Access — Communities should preserve their own cultures and languages rather than having it done for them in a condescending way,…
S14
Intelligent Society Governance Based on Experimentalism | IGF 2023 Open Forum #30 — The development of AI and robotics is seen as increasingly necessary due to demographic changes and the complexity of ce…
S15
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S16
AI and human creativity: Who should hold the brush? — Economic structures that value human creativity:If AI can flood the market with ‘good enough’ content at minimal cost, w…
S17
Global AI Policy Framework: International Cooperation and Historical Perspectives — Bali contends that fundamental concepts like privacy vary significantly across cultures, and that Global South countries…
S18
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S19
Artificial intelligence (AI) – UN Security Council — The discussion on the unintended consequences of rushed AI regulations was a central theme across multiple sessions duri…
S20
How AI Is Transforming Indias Workforce for Global Competitivene — Co-Founder and MD, Nucleus Software Policy, Governance, and Inclusion Strategies Vishnu calls for coordinated action b…
S21
AI/Gen AI for the Global Goals — Boa-Gue mentions the African Startup Policy Framework as an example of an initiative to enable member states to develop …
S22
Cloudflare launches Moltworker platform after AI assistant success — The viral success of Moltbot has prompted Cloudflare tolaunch a dedicated platformfor running the popular AI assistant. …
S23
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S24
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — They raise concerns about the potential problems that could arise from adopting AI strategies that resemble fusion cuisi…
S25
Internet Governance Forum 2024 — The Global Digital Compact (GDC) aims to ensure that its commitments and calls have a meaningful and impactful reflectio…
S26
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — This comment introduces a critical counterpoint to the assumed benefits of global harmonization, highlighting power dyna…
S27
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Matthew Prince Cloudflare — “One where AI is distributed, not controlled.”[1]. “We need to stop the consolidation of AI and, again, lead to 500 ,000…
S28
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Global governance of AI is a precursor for a democratic development and evolution. And we need to continue to develop an…
S29
Advancing Scientific AI with Safety Ethics and Responsibility — Oversight should be distributed across multiple entities rather than relying on a single central authority, creating che…
S30
Intelligent Society Governance Based on Experimentalism | IGF 2023 Open Forum #30 — The development of AI and robotics is seen as increasingly necessary due to demographic changes and the complexity of ce…
S31
Open Internet Inclusive AI Unlocking Innovation for All — Prince argued that a similar transformation must occur for internet content, with new business models emerging that comp…
S32
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Another significant aspect discussed is the need to reconsider compensation structures for content creation. The analysi…
S33
Artificial intelligence (AI) – UN Security Council — The discussion on the unintended consequences of rushed AI regulations was a central theme across multiple sessions duri…
S34
How to ensure cultural and linguistic diversity in the digital and AI worlds? — Xianhong Hu:Thank you very much Mr. Ambassador. Good morning everyone. First of all please allow me, I’d like to be able…
S35
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller: Can you hear me? Am I on? Okay, thank you very much. Yeah, I am going to, yeah, first issue you a f…
S36
Cloudflare launches Moltworker platform after AI assistant success — The viral success of Moltbot has prompted Cloudflare tolaunch a dedicated platformfor running the popular AI assistant. …
S37
The potential of technical standards to either strengthen or undermine human rights and fundamental freedoms in case of artificial intelligence systems and other emerging technologies — Audience:Thank you, Nikki. Good morning, everyone. My name is Patrick Day from Cloudflare. Thank you so much for having …
S38
Main Session on Cybersecurity, Trust &amp; Safety Online | IGF 2023 — Alissa Starzak:Thank you for having me. I’m very excited to be here today. I actually think it’s worth a little bit of b…
S39
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — By utilizing a mix of tools and methods, it is possible to effectively address identified issues. Stakeholder cooperatio…
S40
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — They raise concerns about the potential problems that could arise from adopting AI strategies that resemble fusion cuisi…
S41
From Technical Safety to Societal Impact Rethinking AI Governanc — Historical patterns show technology doesn’t automatically benefit everyone without deliberate intervention
S42
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — This comment introduces a critical counterpoint to the assumed benefits of global harmonization, highlighting power dyna…
S43
Keynote Adresses at India AI Impact Summit 2026 — And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t…
S44
Thinking through Augmentation — While Ucuzoglu is optimistic about the long-term impact of transformative technology, he acknowledges that it is not an …
S45
The role of standards in shaping a safe and sustainable AI-driven future — In his concluding remarks, Onoe reflected on the historical role of standards in guiding societies through technological…
S46
Language (and) diplomacy — Analogical reasoning and comparison are well known to human nature. They are not safe from error. Together with forgetfu…
S47
Keynote-Rishi Sunak — Drawing on Geoffrey Ding’s book “Technology and the Great Powers,” Sunak challenged conventional narratives about techno…
S48
Powering AI Global Leaders Session AI Impact Summit India — “but two places went in very different directions on this one was europe and the other was china … fragmentation reall…
S49
Freedom of the press — The Freedom of the Press Act, the Swedish legislation passed in 1766, is recognised as the world’s first law supporting …
S50
By the Same Author — TV and radio were under government control, but the print media was independent and feisty. The Nation had the la…
S51
Closing Ceremony — Maria Ressa: I like that the panel… isn’t really high, and we could stand. Thank you, thank you for being here today. …
S52
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-matthew-prince-cloudflare — This was one of those once -in -a -lifetime moments where technology spread and the world got better as a result. And I …
S53
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Helbig suggested that current discussions about massive, power-hungry data centres might represent a similar blind spot….
S54
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — As AI models get more and more advanced, and lots of other people, I’m sure, will talk about evals, so I won’t get into …
S55
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — These key comments fundamentally shaped the discussion by introducing multiple analytical frameworks that moved beyond s…
S56
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Florian Ostmann:Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why w…
S57
Gen AI: Boon or Bane for Creativity? — Almar Latour, the CEO of Dow Jones, recently discussed the numerous benefits of Artificial Intelligence (AI) in the fiel…
S58
WS #41 Big Techs and Journalism: Disputes and Regulatory Models — 2. Determining fair compensation models for platforms’ use of media content Bia Barbosa: Okay, thank you. Is that oka…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
M
Matthew Prince
16 arguments183 words per minute2838 words925 seconds
Argument 1
Printing press analogy – the spread of the printing press showed how decentralized tech can empower societies (Matthew Prince)
EXPLANATION
Prince uses the historical diffusion of the printing press as a metaphor for how a technology that is not centrally controlled can rapidly empower many societies. He suggests that AI should follow a similar decentralized trajectory to maximize societal benefit.
EVIDENCE
He describes the printing press originating in Germany and spreading to Paris, Rome, the Netherlands, Spain, and London within sixty years, noting that it was not centrally controlled and therefore could not be gatekept by any single nation [7-15]. He then calls the present moment a comparable “once-in-a-lifetime” turning point for AI [18-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince’s keynote draws a direct parallel between the 16th-century diffusion of the printing press and today’s AI, emphasizing decentralization [S6]; the historical view of the press’s impact is also discussed by Ebba Busch, noting its non-dangerous nature and transformative curve [S7].
MAJOR DISCUSSION POINT
Historical precedent for decentralization
Argument 2
Past lessons for AI – the current moment mirrors that transformative spread, urging us to avoid single‑point control (Matthew Prince)
EXPLANATION
Prince argues that the AI era is analogous to the printing‑press revolution and therefore we must learn from history to prevent concentration of power. He warns that allowing a few entities to dominate AI would repeat past mistakes of centralized control.
EVIDENCE
After outlining the printing-press diffusion, he states that “today that is that turning point that we are now” and frames AI as a transformative moment that should not be controlled by a handful of firms [19-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince warns against AI concentration, stating it must be “distributed, not controlled,” a stance recorded in his keynote [S6].
MAJOR DISCUSSION POINT
Learning from history to prevent AI centralization
Argument 3
AI should be owned by 500,000 firms worldwide, not a handful of giants (Matthew Prince)
EXPLANATION
Prince proposes that AI ownership be massively distributed, targeting half a million companies globally rather than a few dominant players. This distribution is presented as essential for democratizing the technology.
EVIDENCE
He explicitly states that AI “should not be a technology which is controlled by five companies. It should be 500,000 companies, and those companies should be spread around the world” [24-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The target of 500,000 AI-owning companies is explicitly mentioned in Prince’s speech as the desired distribution model [S6].
MAJOR DISCUSSION POINT
Massive decentralization of AI ownership
Argument 4
Policies must ensure AI is globally distributed and not gatekept by a few (Matthew Prince)
EXPLANATION
Prince calls for policy frameworks that guarantee AI is accessible worldwide and not monopolized. He links this to the earlier call for democratization of the technology.
EVIDENCE
He repeats the need to “democratize this technology and make it available for everyone and anyone” and stresses global distribution as a policy goal [26-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince calls for policy frameworks that guarantee global AI distribution and prevent gatekeeping, as outlined in his keynote remarks [S6].
MAJOR DISCUSSION POINT
Policy‑driven global distribution of AI
Argument 5
Current AI extracts content without rewarding creators; a new model must pay journalists, academics, and researchers (Matthew Prince)
EXPLANATION
Prince highlights the imbalance where AI systems consume vast amounts of creative work without compensating the original producers. He calls for new business models that remunerate these knowledge creators.
EVIDENCE
He notes that “AI takes but it does not give back” and stresses the need for content creators, journalists, academics, and researchers to be compensated rather than having their work merely regurgitated by AI systems [28-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince highlights the imbalance of AI taking content without compensation, echoed by IGF findings on platforms profiting from journalism without paying creators [S9] and his own comment “AI takes but it does not give back” [S6].
MAJOR DISCUSSION POINT
Fair compensation for content creators
Argument 6
Reward systems should value knowledge‑advancing content rather than traffic‑driven, sensational material (Matthew Prince)
EXPLANATION
Prince argues that future reward mechanisms should prioritize content that expands human knowledge over content that merely generates clicks or provokes outrage. This shift would align incentives with societal benefit.
EVIDENCE
He proposes a reward system that “actually rewards creators for furthering human knowledge” and criticizes traffic-driven, rage-baiting content, emphasizing the need to fund knowledge-advancing work [103-112].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince critiques the traffic-driven reward model and advocates incentives that advance human knowledge, a point made in his keynote and aligned with broader discussions on shifting value away from clicks [S6].
MAJOR DISCUSSION POINT
Incentivizing knowledge‑building over sensationalism
Argument 7
Small, personal‑relationship businesses need AI tools to stay competitive in agentic commerce (Matthew Prince)
EXPLANATION
Prince warns that AI‑driven commerce could sideline small businesses that rely on personal relationships. He stresses the need to equip these firms with AI tools so they can remain viable.
EVIDENCE
He expresses concern that “the small businesses that most of us do business with today… your AI agent isn’t going to necessarily care about those things” and calls for tools to help them survive in an AI-centric market [33-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince stresses equipping relationship-based SMEs with AI tools, a concern also highlighted in analyses of small-business challenges in AI adoption [S11].
MAJOR DISCUSSION POINT
Supporting SMEs in an AI‑driven market
Argument 8
AI must empower entrepreneurs in the Global South, avoiding a market dominated by five large firms (Matthew Prince)
EXPLANATION
Prince emphasizes that AI should be a catalyst for entrepreneurship in the Global South rather than a tool that consolidates power among a few giants. He links this to broader goals of decentralization and equitable access.
EVIDENCE
He calls for AI to be “available to all, especially the poorest of those in the global south” and warns against a world with only five AI companies, advocating for 500,000 firms to ensure global participation [40-42][116-119][120-122].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince’s call for AI to empower Global South entrepreneurs and avoid concentration among five firms is documented in his speech [S6] and supported by capacity-building discussions for developing nations [S12].
MAJOR DISCUSSION POINT
Preventing AI concentration and fostering Global South entrepreneurship
Argument 9
AI must respect and highlight local languages, cultures, and identities rather than homogenize them (Matthew Prince)
EXPLANATION
Prince argues that AI systems should preserve cultural diversity by supporting local languages and identities. He cautions against a homogenizing effect that would erase regional uniqueness.
EVIDENCE
He states that AI should “recognize that unique cultures and unique identities, languages, shouldn’t be homogenized” and must “respect and actually emphasize” regional differences [35-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince asserts AI should “respect and actually emphasize” regional cultures and languages, a stance recorded in his keynote [S6] and reinforced by research on multilingual AI preserving cultural diversity [S13].
MAJOR DISCUSSION POINT
Preserving cultural and linguistic diversity in AI
Argument 10
Avoid an “Americanizing” effect; ensure AI reflects regional uniqueness (Matthew Prince)
EXPLANATION
Prince specifically warns against AI becoming an instrument of American cultural dominance. He calls for AI to honor the distinctiveness of all regions.
EVIDENCE
He says “We don’t want to make the mistake of just merely Americanizing the world, but instead we want to honor the culture of all of those places around the world” [38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince warns against AI becoming an instrument of American cultural dominance and calls for honoring all cultures, as stated in his address [S6].
MAJOR DISCUSSION POINT
Countering cultural homogenization by dominant powers
Argument 11
AI services must be affordable for the poorest, not limited to costly subscriptions (Matthew Prince)
EXPLANATION
Prince stresses that AI should not become a luxury accessible only to those who can afford high subscription fees. He calls for models that make high‑quality AI reachable for the most disadvantaged.
EVIDENCE
He notes that AI “can’t be something where you can only get the latest, unbiased, unfiltered, highest technology if you can afford to spend thousands of dollars per month on a subscription” and calls for inclusive business models [40-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince emphasizes that AI should not be a luxury limited to high-price subscriptions, a point made in his keynote [S6].
MAJOR DISCUSSION POINT
Ensuring affordability for low‑income users
Argument 12
Business models should allow broad, low‑cost access to the latest, unbiased AI (Matthew Prince)
EXPLANATION
Prince advocates for business structures that provide the most advanced AI to a wide audience at low cost, preventing exclusion based on wealth. This complements his earlier affordability point.
EVIDENCE
He reiterates the need for a business model that “allows AI to be available to the broadest set of users” and prevents leaving people behind with powerful technology [42-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince advocates for business structures that deliver the most advanced, unbiased AI to the widest audience at low cost, as outlined in his speech [S6].
MAJOR DISCUSSION POINT
Designing inclusive AI business models
Argument 13
Cloudflare’s global network (120+ countries, 20% of Internet traffic) positions it as a broker between creators and AI firms (Matthew Prince)
EXPLANATION
Prince outlines Cloudflare’s extensive infrastructure as a strategic position to mediate between content creators and AI companies. He frames the company as a facilitator for a better internet ecosystem.
EVIDENCE
He cites Cloudflare’s presence in “over 120 countries, more than 300 cities worldwide” and that “over 20% of the Internet sits behind us” while noting that “over 80% of the leading AI companies use us” [56-63]. He describes Cloudflare as a broker between creators and AI firms [69-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince cites Cloudflare’s presence in over 120 countries and handling more than 20% of internet traffic, positioning the company as a broker between creators and AI firms [S6]; security statistics showing Cloudflare blocks billions of attacks further illustrate its central role [S8].
MAJOR DISCUSSION POINT
Cloudflare’s intermediary role in the AI ecosystem
Argument 14
Deploying top AI models locally, regionalizing them for local laws/languages (e.g., AI for Bharat) (Matthew Prince)
EXPLANATION
Prince explains that Cloudflare is making leading AI models available at the edge, tailored to regional legal and linguistic contexts. This approach aims to reduce latency and respect local norms.
EVIDENCE
He describes running top models across the global network so they can be executed “in the city where you are actually living” and making them easy to use without a CS degree [126-128]. He highlights the “AI for Bharat” rollout with 22 official Indian languages and its availability to students [136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince describes edge-deployed, region-specific models such as “AI for Bharat,” supporting 22 Indian languages and local compliance, as detailed in his keynote [S6].
MAJOR DISCUSSION POINT
Localized, edge‑deployed AI models
Argument 15
Funding education, accelerator programs, and free credits for startups, especially in India (Matthew Prince)
EXPLANATION
Prince details Cloudflare’s initiatives to support startups and students through accelerator programs, generous credit allocations, and educational funding, particularly focusing on India’s vibrant startup ecosystem.
EVIDENCE
He mentions Cloudflare’s own startup accelerator, noting it is “the second largest by a country participants come from here” in India, and that the company provides “enormous credits” for free services to startups [129-133]. He also references an IIT build-a-thon linked to AI for Bharat and Cloudflare Workers AI [138-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince outlines Cloudflare’s accelerator, large credit allocations, and partnerships with Indian institutions to support startups and education, all mentioned in his speech [S6].
MAJOR DISCUSSION POINT
Supporting AI entrepreneurship through education and financing
Argument 16
Building secure‑by‑design, efficient infrastructure to keep AI affordable and prevent the need for massive capital (Matthew Prince)
EXPLANATION
Prince emphasizes that Cloudflare designs its AI infrastructure with security and efficiency at the core, aiming to lower the cost barrier for new AI entrants. He argues that affordable, secure systems are essential to avoid concentration of power.
EVIDENCE
He states that “we need to have secure by design” and that the goal is to avoid requiring “trillions of dollars of budget” or “stand up your own nuclear power plant” to become an AI company, focusing instead on efficiency to pass on lower costs [141-147].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Prince stresses a “secure-by-design” and efficient AI infrastructure to lower entry costs, a point echoed by Cloudflare’s security metrics showing billions of attacks blocked [S8] and his own remarks on avoiding trillion-dollar budgets [S6].
MAJOR DISCUSSION POINT
Secure, cost‑effective AI infrastructure
Agreements
Agreement Points
Similar Viewpoints
Unexpected Consensus
Overall Assessment

The transcript contains only a brief introductory remark from Speaker 1 and a substantive keynote by Matthew Prince. Apart from a shared courteous opening ([1][2]), there are no substantive points on which the two speakers agree or diverge, because Speaker 1 does not present any arguments. Consequently, the discussion shows minimal overlap in viewpoints.

Very low – the only observable consensus is a polite greeting. This limits the ability to draw broader conclusions about shared policy positions or strategic priorities for AI.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The session consisted of an introductory remark by Speaker 1 ([1]) followed by a single, uninterrupted presentation by Matthew Prince ([2-155]). No other speakers offered contrasting positions, and Prince’s remarks do not contain explicit counter-arguments to his own statements. Consequently, the transcript shows no direct disagreement between participants; the discussion is effectively a monologue presenting a set of proposals.

Minimal – the lack of opposing viewpoints means there is no substantive conflict to negotiate. This suggests strong internal consensus on the five‑point framework, but also indicates that the feasibility and policy pathways will need to be debated in subsequent multi‑stakeholder forums.

Takeaways
Key takeaways
The spread of the printing press illustrates how decentralized technology can empower societies; AI is at a similar transformative moment. AI should be democratized and distributed globally, owned by hundreds of thousands of firms rather than a handful of giants. Current AI practices extract content without compensating creators; new business models must reward journalists, academics, and researchers, especially for knowledge‑advancing work. Small businesses and entrepreneurs, particularly in the Global South, need AI tools to stay competitive and must be protected from market consolidation. AI systems must preserve cultural and linguistic diversity and avoid a homogenizing, “Americanizing” effect. Universal accessibility and affordability are essential; AI should not be limited to expensive subscriptions for the wealthy. Cloudflare, with its global network, positions itself as a broker to help realize these goals through local model deployment, regionalization, education funding, accelerator programs, free credits, and secure‑by‑design, cost‑efficient infrastructure.
Resolutions and action items
Cloudflare will deploy leading AI models on its global edge network, enabling low‑latency, regionalized inference. Cloudflare will regionalize models to respect local laws, languages, and cultures (e.g., AI for Bharat with 22 Indian languages). Cloudflare will expand its startup accelerator and provide free credits to startups and students, especially in India and other emerging markets. Cloudflare will invest in education programs and hackathons to build AI expertise in the Global South. Cloudflare will design its AI infrastructure to be secure‑by‑design and cost‑efficient, lowering the capital barrier for new AI entrants. The speaker challenged all participants to adopt the five outlined values (distribution, creator compensation, support for small businesses, cultural preservation, universal access).
Unresolved issues
Specific mechanisms for compensating content creators and researchers for AI‑generated use of their work remain undefined. Concrete policy frameworks or regulatory actions needed to prevent AI market consolidation were not detailed. How to transition the current internet business model (traffic‑driven revenue) to a new model that rewards knowledge‑advancing content is still an open question. Methods for ensuring affordable access to the latest, unbiased AI for the poorest populations were discussed but not finalized. Metrics and governance structures to monitor AI decentralization and cultural preservation were not established.
Suggested compromises
Balancing rapid AI advancement with the need for security and affordability – Cloudflare aims to make AI efficient and low‑cost while maintaining secure‑by‑design principles. Encouraging both large AI providers and a multitude of smaller entrants – acknowledging the current dominance of a few firms while promoting tools and incentives for many new players.
Thought Provoking Comments
Much like the printing press, this should not be a technology which is controlled by five companies. It should be 500,000 companies, spread around the world.
He draws a historical parallel to the printing press to argue for massive decentralization of AI, challenging the emerging reality of a handful of dominant AI firms.
Sets the overarching theme of the talk and frames the subsequent five‑point framework. It steers the audience toward thinking about distribution rather than concentration, prompting later remarks about global participation and the risk of consolidation.
Speaker: Matthew Prince
We need to make sure that content creators, journalists, academics, and researchers are compensated for the hard work they do, rather than having their content simply regurgitated by AI systems.
Introduces the ethical and economic problem of data extraction without remuneration, a topic that is often glossed over in AI hype.
Shifts the conversation from pure technology to the economics of knowledge. It leads to his later discussion of a new reward model and primes the audience to consider policy solutions for creator compensation.
Speaker: Matthew Prince
What I worry about is that small businesses, especially those in the global South, will be left behind because AI agents don’t care about personal relationships or convenience.
Highlights a concrete risk of AI‑driven commerce: the erosion of the personal, relationship‑based economy that sustains many SMEs, especially in developing regions.
Introduces a geographic equity dimension, prompting the later emphasis on regionalized models and the need for tools that empower businesses in the Global South.
Speaker: Matthew Prince
AI should not homogenize culture; it must respect and emphasize unique languages, identities, and regional differences.
Challenges the implicit assumption that AI will be a universal, one‑size‑fits‑all solution, urging preservation of cultural diversity.
Leads directly to his description of “AI for Bharat” and the rollout of models in 22 Indian languages, showing a concrete implementation of the principle he just articulated.
Speaker: Matthew Prince
The old internet business model—traffic → ads/subscriptions—is collapsing. Google now needs 30 pages scraped for one visitor; AI companies scrape thousands of pages per visitor.
Provides a data‑driven diagnosis of why the current value‑exchange model is unsustainable, framing AI as a disruptive force that will upend traditional revenue streams.
Creates a turning point in the talk: from describing ideals to confronting the economic reality. It sets up his proposal for a new reward system based on knowledge creation rather than traffic.
Speaker: Matthew Prince
Imagine knowledge as a block of Swiss cheese—holes are gaps in human understanding. If we reward creators for filling those holes, we align incentives of AI companies and society.
Offers a novel metaphor and concrete incentive structure that reframes the creator‑compensation problem as a collaborative effort to close knowledge gaps.
Deepens the analysis by moving from problem identification to a potential solution, influencing the audience to think about measurable metrics for “knowledge value” rather than clicks.
Speaker: Matthew Prince
We are taking top AI models and deploying them on our global network so they run in the city where users live, with regionalized training on local laws, languages, and cultures.
Shows a practical implementation of the decentralization and cultural‑preservation principles, turning abstract ideas into actionable engineering steps.
Provides a tangible example that validates earlier claims, reinforcing credibility and encouraging other stakeholders to consider similar distributed architectures.
Speaker: Matthew Prince
Security by design and affordability must be baked in; we cannot require trillions of dollars or nuclear‑scale infrastructure for the next AI company.
Links the earlier call for democratization to two concrete barriers—security and cost—highlighting that without addressing them, decentralization will fail.
Closes the talk by summarizing the technical prerequisites for the vision he outlined, leaving the audience with clear, actionable challenges to tackle.
Speaker: Matthew Prince
Overall Assessment

Matthew Prince’s remarks collectively shaped the discussion from a historical analogy into a multi‑dimensional roadmap for an inclusive AI future. By repeatedly juxtaposing the printing press’s diffusion with today’s risk of AI concentration, he reframed the debate around decentralization, creator compensation, cultural diversity, and economic sustainability. Each pivotal comment introduced a new layer—ethical, geographic, economic, technical—that broadened the conversation and forced listeners to consider concrete policy and engineering responses rather than abstract optimism. The speech’s turning points—particularly the diagnosis of the collapsing traffic‑based business model and the Swiss‑cheese knowledge‑gap metaphor—shifted the tone from aspirational to problem‑solving, steering the audience toward actionable solutions such as regional model deployment and reward mechanisms for knowledge creation.

Follow-up Questions
What will the new business model of the Internet look like in an AI‑driven world?
Understanding a sustainable model is crucial because traditional traffic‑based monetisation is collapsing as AI delivers content directly to users, threatening creators, journalists and small businesses.
Speaker: Matthew Prince
How can we stay ahead of the rapid changes brought by AI to the Internet ecosystem?
Proactive strategies are needed to anticipate shifts in traffic, content consumption and value creation, ensuring stakeholders can adapt before disruption harms them.
Speaker: Matthew Prince
How can we ensure small businesses, especially in the Global South, have the tools and support to survive and thrive in an increasingly agentic commerce environment?
Small enterprises rely on personal relationships and local knowledge; without appropriate AI tools they risk being displaced, which would undermine economic inclusion and diversity.
Speaker: Matthew Prince
What mechanisms can be put in place to compensate content creators, journalists, academics and researchers for the use of their work by AI systems?
Current AI pipelines scrape vast amounts of content without remuneration; fair compensation models are needed to sustain high‑quality knowledge production.
Speaker: Matthew Prince
How can we design a reward system that incentivises creation of knowledge‑advancing content rather than sensational or rage‑bait material?
Aligning incentives with human‑knowledge growth would improve the quality of information fed to AI and counteract the traffic‑driven, low‑value content that dominates today.
Speaker: Matthew Prince
How can we prevent AI from homogenising cultures and instead ensure it respects and amplifies diverse languages, identities and local customs?
Preserving cultural diversity is essential to avoid a monolithic, American‑centric AI output and to maintain the richness of global heritage.
Speaker: Matthew Prince
What policies and technical approaches are needed to make AI affordable and accessible to the poorest populations in the Global South, not just to those who can pay high subscription fees?
Equitable access prevents a digital divide where only wealthy users benefit from the most advanced, unbiased AI capabilities.
Speaker: Matthew Prince
How can we avoid the concentration of AI power in a handful of companies and instead foster a landscape of hundreds of thousands of AI enterprises?
Broad competition reduces the risk of monopolistic control, promotes innovation, and aligns with the historical diffusion of transformative technologies like the printing press.
Speaker: Matthew Prince
What technical and governance frameworks are required to build AI systems that are secure‑by‑design and affordable without needing trillion‑dollar budgets?
Security and cost‑effectiveness are critical to prevent repeating the Internet’s early security oversights and to enable widespread participation.
Speaker: Matthew Prince
How can regional AI models be effectively trained on local laws, languages and cultural nuances, and deployed at scale across diverse geographies?
Localized models ensure compliance, relevance and cultural sensitivity, supporting the goal of a truly global, inclusive AI ecosystem.
Speaker: Matthew Prince
What concrete actions should businesses, governments and civil society take to achieve the five outlined AI milestones (distribution, creator enablement, fair competition, cultural preservation, universal access)?
Identifying specific policy, investment and collaboration steps is necessary to move from high‑level principles to measurable progress.
Speaker: Matthew Prince

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Founders Adda Raw Conversations with India’s Top AI Pioneers

Founders Adda Raw Conversations with India’s Top AI Pioneers

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session, organized by Archana Jahargirdar, was framed as a product-only showcase where founders present technical details without discussing business or funding, and speakers were asked to balance jargon with accessibility for non-AI audiences [2-8].


Ravindra Kumar introduced Technodate AI, describing its ambition to “automate automation” by using agentic AI to make industrial robotics and automation DIY-friendly, offering three modules for concept design, deployment, and troubleshooting [28-32]. He explained that while a foundational model would be ideal, limited funding in India forced the team to first engage customers, run pilot deployments, and only later recognize the need for such a model, yet they have already delivered solutions to Fortune-500 firms and the Indian Air Force [33-41]. Technodate’s credibility is reinforced by collaborations with experts such as Dr. Sumit Chopra and a team drawn from IITs, and a demo was promised to illustrate the end-to-end workflow [45-48][49-53].


Vaibhavath Shukla presented Quonsys AI, positioning it as a voice-infrastructure platform that removes humans from call-center loops, enabling end-to-end automation of customer support across Indian languages and leveraging a partnership with OpenAI for data generation [67-76][70-73]. He illustrated a real-estate lead scenario where the AI agent can answer calls, record interest, and schedule visits, and noted that the service is billed per minute of usage rather than a subscription [95-101].


Pradyum Gupta described Papri Labs’ visual-data mapping solution that continuously updates maps using dash-cam and CCTV feeds, processing petabytes of video to provide real-time information for applications such as billboard pricing, autonomous-vehicle safety, and public-transport optimization [134-142][148-154]. In response to data-privacy concerns, he stated that raw video is never released publicly, faces and number plates are blurred, and the system runs on bare-metal servers in Europe to comply with India’s DPDP regulations [207-214][225-233]. Pricing is offered on a per-tile basis, with a 25 km² tile costing 1.5 lakh rupees for a single-day license, scaling with volume [267-271].


Meenal Gupta introduced Imagix AI, an AI-driven precision imaging platform for cancer treatment planning that is HIPAA-compliant, ISO-13485 certified, holds four patents, and has achieved 92-99 % accuracy after training on a 5-million-image dataset that includes 30 % Indian data [288-295][332-340][336-342]. She emphasized that the system assists rather than replaces radiologists, keeping a human-in-the-loop for final approval to build trust in clinical settings [348-352].


Vivek Gupta then outlined Indus Labs AI’s voice operating system, a DIY, no-code platform that provides speech-to-text, text-to-speech, and LLM services optimized for Indian dialects with sub-400 ms latency and up to 70 % cost reduction compared with global providers [360-371][380-384]. The platform includes emotion detection, integrates with CRM workflows, offers per-second billing, and is hosted on Indian sovereign infrastructure, with partnerships for telecom connectivity and international white-labeling [389-403][417-424].


The session concluded with Archana thanking the founders and encouraging further one-on-one discussions, underscoring the emphasis on practical product deployment and compliance across diverse AI applications [468-469].


Keypoints

Major discussion points


Product-only presentation format: The moderator stresses that the summit is strictly for sharing product details, not business pitches or funding talks, and asks presenters to balance technical jargon with accessibility for non-AI audiences. [2-8][9-10]


AI-driven solutions targeting distinct industry problems:


Industrial automation: Technodate AI aims to “automate automation” with an agentic AI that helps users conceptualize, deploy, and troubleshoot robotics solutions, and notes the need for a foundational model after early customer experiments. [20-32][33-41]


Voice-first call-center automation: Quonsys AI builds a “voice infrastructure” that can run end-to-end call-center operations, handling inbound leads, booking appointments, and charging per-minute usage. [68-78][99]


Real-time map updating: Papri Labs uses city-wide dash-cam and CCTV feeds to create instantly refreshed visual maps and offers use-case-specific pricing (tiles of 25 km²). [118-138][267-270]


AI-assisted cancer treatment planning: EasyOPI’s Imagix AI provides HIPAA-compliant, ISO-certified imaging analysis that reduces manual contouring time from up to 90 minutes to 5-15 minutes, with reported 92-99 % accuracy across multiple Indian states. [288-336][337-345]


Voice-platform as an operating system: Indus Labs AI builds a DIY, low-latency voice stack (STT, TTS, LLM, emotion detection) for Indian languages, promising up to 70 % cost reduction and integration with CRM and telephony systems. [355-363][380-384][389-393]


Technical and regulatory challenges around foundational models, data, and compliance: Several founders discuss the difficulty of building or accessing large foundational models, the importance of proprietary data engines, and strategies for scaling while remaining compliant with data-privacy laws (DPDP) and medical regulations. [33-41][109-113][207-216][225-236][290-352]


Business models, pricing, and deployment considerations: Presenters outline their revenue approaches-per-minute usage for voice agents, tile-based licensing for mapping data, and subscription models for automation-while fielding audience questions about integration, incentives for data contributors, and cost structures. [99][267-270][244-247][99-101]


Overall purpose or goal of the discussion


The session is a founder-focused showcase at an AI summit where each startup presents only its product (no fundraising or market-size pitches) to enable peer learning, surface practical implementation issues, and foster collaboration among AI innovators. [2-8][9-10]


Overall tone


The conversation begins with a courteous, instructional tone as the moderator sets expectations. It then shifts to an enthusiastic, technical tone as founders detail their innovations, followed by a more interactive and inquisitive tone during the Q&A, where practical concerns (pricing, compliance, scaling) are raised. Throughout, the atmosphere remains supportive and collaborative, with occasional defensive nuances when addressing challenges (e.g., data-privacy compliance). [2-8][55-60][207-216][225-236][442-445]


Speakers

Archana Jahargirdar – Moderator/host from Rukam Capital; facilitates founder presentations and Q&A sessions. [S4]


Meenal Gupta – Founder of EasyOPI Solutions; expertise in AI-driven precision imaging and treatment planning for cancer (HIPAA-compliant, medical-device software). [S3]


Vaibhavath Shukla – Founder and CEO of Quonsys AI; focuses on voice infrastructure and AI-powered call-center automation. [S6]


Pradyum Gupta – Founder/representative of Papri Labs; builds real-time mapping and visual-analytics platform using dashcam/CCTV data. [S7]


Ravindra Kumar – Representative of Technodate AI; works on agentic AI for automation, robotics conceptualization, deployment and troubleshooting.


Vivek Gupta – Founder and CEO of Indus Labs AI; develops a voice operating system (speech-to-text, text-to-speech, LLM) for Indian languages with low-latency voice agents. [S11]


Audience – General participants asking questions; no specific role or title provided.


Additional speakers:


Weber – Mentioned by Archana as the next presenter; no further details available.


Karan – From Rukam Capital (mentioned alongside Archana); likely a partner or investor at Rukam Capital.


Dr. Sumit Chopra – Ph.D. collaborator referenced in the discussion; expertise in AI research.


Full session reportComprehensive analysis and detailed insights

The session began with moderator Archana Jahargirdar establishing a strict “product-only” format, asking founders to discuss only the technical aspects of their solutions and to avoid any mention of business models, funding or revenue. She also invited presenters to use jargon where appropriate but to simplify where possible for audience members who are not AI specialists [2-8][9-10].


Ravindra Kumar (Technodate AI) introduced the company’s ambition to “automate automation” by deploying agentic AI that makes industrial robotics and automation accessible as a DIY task [11-13]. He described three core modules: (i) conceptualising engineering solutions, (ii) deploying and commissioning them-including robot programming, and (iii) troubleshooting any failures [30-32]. Although an ideal foundational model would accelerate development, limited funding in India forced the team to first engage customers, run pilot deployments and only later recognise the need for such a model [33-41]. Kumar highlighted collaborations with Dr Sumit Chopra and a team drawn from IITs [45-48] and announced a live demo of the end-to-end workflow [49-52]. He also disclosed upcoming deployments with the Indian Air Force [53-55], partnerships with several Fortune 500 companies [45-48], and noted that Technodate will exhibit at Hall 14 for further discussions [58-60].


During the brief Q&A, Kumar argued that even if a super-intelligent model (ASI) were available, the real value lies in building the application layer that solves specific customer problems, rather than relying solely on the model itself [55-60][61-64].


Vaibhavath Shukla (Quonsys AI) positioned his venture as a “voice infrastructure” that can fully automate call-centre operations, removing humans from the loop and handling tasks such as inbound lead qualification, appointment booking and follow-up [68-78]. He cited partnerships with OpenAI, Paytm, CRED and PropBotX [84-86][95-101] and described a pricing scheme based on per-minute usage rather than a fixed subscription [95-101]. To overcome data scarcity, Quonsys built a proprietary data engine that generates synthetic training data and also powers Indic-language voice models in collaboration with OpenAI [78-80][109-115]. Shukla reported cost-saving figures of 70-90 % for large BPO or enterprise customers [119-122] and outlined plans to increase concurrency from the current 50 requests to thousands as the model scales [124-131]. In the subsequent audience interaction, Shukla reiterated that the proprietary data engine is central to scaling and that the system can dynamically adapt to varied use-cases (e.g., real-estate lead handling) by integrating with web-socket handshakes and CRM tools [95-101][124-131].


Pradyum Gupta (Papri Labs) described a visual-data mapping platform that continuously refreshes maps using dash-cam and CCTV feeds deployed across metro cities [134-142]. The platform processes petabytes of video to support use-cases such as dynamic billboard pricing, autonomous-vehicle safety checks, optimisation of Delhi Transport Corporation’s bus fleet, and automated news generation [148-152][150-156]. Pricing is offered on a per-tile basis (25 km² tiles at ₹1.5 lakh per day, with volume discounts) [267-271]. When questioned about data-privacy under India’s DPDP regime, Gupta clarified that raw video never leaves the company, that faces and number plates are blurred, and that all processing runs on bare-metal servers in Europe rather than on hyperscalers [207-214][225-233]. He also stated that contributors (e.g., dash-cam owners) are not paid incentives; instead, the platform charges them for the service [246-247].


Meenal Gupta introduced EasyOPI Solutions’ “Imagix AI”, an AI-driven precision imaging platform for cancer treatment planning. She highlighted the acute shortage of oncology experts in India and explained how the system assists radiologists by automatically contouring organs at risk, reducing manual processing time from up to 960 minutes to 5-15 minutes [332-340][336-345]. The product is HIPAA-compliant, ISO 13485 certified and holds four patents, with an accuracy range of 92-99 % after training on a 5-million-image dataset that includes 30 % Indian data [288-295][337-342]. Trust is reinforced by keeping a human-in-the-loop for final approval [348-352]. Gupta also mentioned an invitation by Bill Gates at Microsoft to showcase the technology [332-340].


Vivek Gupta (Indus Labs AI) presented a DIY, no-code voice operating system that provides speech-to-text, text-to-speech, large-language-model and speech-to-speech capabilities optimised for Indian dialects. The platform delivers sub-400 ms latency, supports emotion detection and integrates end-to-end with CRM workflows, promising up to 70 % cost reduction compared with global providers such as 11 Labs [362-368][370-384][389-393]. Data residency is ensured by hosting all components on Indian sovereign infrastructure [390-393]. The company has already partnered with telecom operators (Airtel, Geo) and international white-label partners in Dubai and Germany [417-424], and the system also supports Arabic, German, French and Mandarin languages [424-426]. The system is billed per second of usage, with a recharge-based model for customers [423-425]. In the Q&A, Gupta demonstrated how a user can define a lead-handling journey by linking nodes to Google Calendar, enabling the AI agent to book meetings automatically [435-440]. He also described the company’s origin story: after encountering pronunciation issues with third-party TTS, the team built its own stack, first using public data and then creating a proprietary data engine to achieve scalability up to 1 000 concurrent requests within ten minutes [451-466].


The presenters largely agreed that domain-specific application layers supported by proprietary data pipelines are more critical than investing in large, generic foundational models, and that such pipelines help meet data-privacy and regulatory requirements while delivering significant cost savings [55-60][109-115][162-165][362-368][207-214][225-233][290-294][390-393]. Divergences emerged around three points: (1) Kumar argued that a foundational model would eventually be required for industrial automation [33-40], whereas Shukla maintained that a custom data engine suffices [109-115]; (2) Papri Labs stores raw video on European bare-metal servers to satisfy DPDP, while Indus Labs insists on keeping all voice-AI data within India for sovereign control [207-214][225-233][390-393]; and (3) pricing strategies differ, with Papri Labs using a per-tile, per-day licence [267-271], Quonsys charging per minute of AI usage [95-101], and Indus Labs adopting a per-second, recharge-based model [423-425].


Most presenters largely respected the product-only guideline, though a few references to pricing, partnerships, or commercial arrangements slightly breached the rule [43][267-271][74-76].


The session concluded with Archana thanking the founders, encouraging attendees to continue one-on-one conversations, and inviting everyone for a group photograph [468-469], reinforcing the summit’s goal of fostering collaborative, product-centric dialogue among AI innovators while highlighting shared challenges of data ownership, regulatory compliance and cost-effective deployment across diverse Indian sectors.


Session transcriptComplete transcript of the session
Archana Jahargirdar

Thank you. Thank you. you you Thank you. Thank you. Thank you. so how do founders learn about these changes the only way you can learn or maybe the best way to learn at a conference like this at a summit like this is by listening to each other so it’s not a pitch there’s not going to be any talk about business there’s going to be no funding conversation it’s only about product so I’m going to request all the founders who are presenting to come up and then we’ll go sequentially and the other request on the presentations to all the founders who are presenting is please come I mean feel free to come up is that please use jargon because the intent is that the audience will understand it however also be mindful that if you want more people to learn who may not be AI natives may not be AI people may not be technologists may not be AI people but it’s still important for them to learn and understand.

So if you can simplify it, it’s fine. If you don’t want to simplify it, it’s also okay. So the format we’ll follow is that each one of you takes a little bit of time to talk about your product. But like I said again, only product. No business, no pitching, no money, nothing. So I’m going to request to start with, if I could request Ravindra Kumar to talk about what is it that you’re building. So quick introduction and then the product that you’ve built.

Ravindra Kumar

Hi everyone, this is Ravindra from Technodate AI. And we are aiming to automate automation itself. Everybody says AI won’t take away any jobs. We’re like, let us do something about it.

Archana Jahargirdar

Do you want to stand there at the podium or you want me to start? Whatever, whatever. No, no. Yeah, you can start your presentation. Shall we do that? Yeah, yeah, we should. Because I want people to really get into the product.

Ravindra Kumar

Can I use the clicker? What generally happens is that there are already very sophisticated automation equipment available out there in the market. Before starting Technodate, I have been working with this company, which happens to be world’s largest manufacturer of industrial robots. Way back in the year 2010 or so, they achieved 100 % automation, which means no human on the shop floor. Still, the manufacturing is happening at 100 % capacity. On the other side, if you see globally, including India, manufacturing is not even successful. People are not able to. use automation to the fullest extent. That is something which Technodate is aiming to solve. We want to make automation as easy as DIY using agentic AI. So what it does is basically help you in three ways.

First, to conceptualize an engineering robotics and automation engineering solution on your own. Then to deploy and commission that including robot programming etc. etc. And then eventually it also helps you to troubleshoot when something doesn’t work. For this, we started as okay, we have to do something like this because the idea of this discussion is how do we go from experimentation to real world deployment. So when I came up with this idea, the first thought was that you need to build a foundational model. But we are in India. It’s not that easy to raise money to build a foundational model. But then how do you approach this? The idea is okay, let us go talk to customer.

Let us experiment with what all options are available out there and then figure out in the process, do we need a foundational model? So we started working, started talking to customers, started doing some initial deployments. Today we stand again back where we started from that we need a foundational model for this. But in the process, we have already started deploying application, including with people like Fortune 500 companies. This is how the team comes from. We are working with some people that I’m sorry, Archana, but being a founder, some pitching comes in by default. But then, yeah, this is how the team looks like. We are collaborating with people like Dr. Sumit Chopra, a Ph .D. under the godfather of AI and Lincoln.

He worked at a fair earlier. We are exploring or we are rather going to deploy a use case very soon with Indian Air Force itself. Of course, the team comes from IITs. I have a small demo to show to everyone. There’s some music to this. But it is. It’s just music. There’s no audio in any case. So what it does is, as I said, three modules. it helps you to conceptualize robotics and automation solution it helps you to build that the agentic AI what it does is it acts mimics automation expert it really finds what it takes to deploy that solution in a real world scenario it gives you the complete architectures it gives you the programs it gives you the step by step procedures how do you put these systems together and then eventually in the final I mean of course you can also ask it to make changes what happens is in industrial scenario you change one equipment it has to talk to all other equipment so everything changes so it does all that on its own autonomously by using agents in the background you can also see how that solution will look like on your factory which has been conceptualized by the agents and then when it comes to robot programming many people ask me that Chad Jibadi can’t do this why do you want to build a foundational model for this when it comes to robotics necessarily we are interacting with the real world you have to understand what is the object what needs to be done how the robot needs to move all that data needs to be injected into the systems only then robotic programming can be done anyways uh then there is something called cnc programming cncs are the mother machine so every aerospace component every automotive engine be it two -wheeler four -wheeler they’re all machined on cnc machines for that matter to build other machines you need a cnc machine so all those programs can also be generated by using uh agentic or generative ai in this case in defense use case it’s like for example this is a case of aero engine where you just say the error code the 3d model explodes and the generative ai tells you where and what steps to take to solve that particular problem for example you will be able to see you said the error code it shows you where in the whole machine that error belongs and these are the steps you need to take to solve these problems so yes this is it from me we are exhibiting at hall 14 see you all there who want to discuss more

Archana Jahargirdar

so anybody has questions on the product you including founders sitting on this panel can ask questions on the product any question yes please

Ravindra Kumar

we’ll use their model see I have no I am not fond of building foundational model my aim is to solve the problem of my customer absolutely so one thing that this is this will never be in a human history these kind of tasks will never be simple chat response kind of scenario you need common play workflows, right? So even if, let us say, OpenAI wants to do it, he will have to build a custom application for this, right? So this is an application layer. Model can become ASI, the super intelligence level. You still will have to build the application. So that is our first approach. Second is for industrial domain. Even if OpenAI wants to do today, he will have to build a foundational model for this, separately.

Because it is related to industrial world. The 3D actual world, the data is proprietary, customer doesn’t share it with you. So your application has to run on his premises or on the virtual clouds.

Archana Jahargirdar

Okay, thank you. Weber, you are next.

Vaibhavath Shukla

Thank you so much. Thank you. Thank you. Thank you. first of all I would like to thank Karan and Archana from Rukam Capital for giving me this opportunity India doesn’t need more wrappers we need infrastructure and that’s what we are building at Quonsys AI my name is Vaibhavath Shukla I’m the founder and CEO of Quonsys AI we are building the voice infrastructure for India and so India is the customer support capital of the world it is 55 billion dollar industry for us it’s roughly 2 % of India’s GDP and the problem is that this entire model is outdated in the agentic era so that’s what we are solving we ask this question to ourselves if we can automate the call centers itself and the call centers can automate and completely run by themselves so for that we started solving this problem and we started building from scratch for what exactly is required to automate the entire call center pieces and that’s what we initiated with Quonsys AI.

So Quonsys is the default layer wherein you don’t need humans in the loop which can automate the entire call center and BPO infrastructure and we can completely run end -to -end for the processes. So these systems can listen, understand, act, respond and solve the entire purpose for any particular use case. So it’s not a concept anymore. We have been working with some of the top enterprises like Paytm, CRED, PropBotX. We are also partnered with OpenAI for the infrastructure. We are working with them on voice and Indic languages infrastructure which we have developed by our own digital data engine and we can generate data at scale. So we are different in a way because we have solved the entire layer for be it your application layer, for orchestration layer, then organization, on the model layer and the data layer itself.

So for example, anything and everything that is required we are basically making the entire suite of the… automation layer for call centers. So it’s completely, you can say call centers are completely running on itself. We have built companies before. We have a really good research team which is helping us in developing the entire foundational layer of it. And we have deployed some of the use cases when we have already worked with some of the large enterprises already. Yeah, and I’m happy to answer

Archana Jahargirdar

Any questions on the product?

Audience

Yes, yeah. So the call lands on the somebody’s phone.

Vaibhavath Shukla

Correct.

Audience

So it’s like again a kind of thing.

Vaibhavath Shukla

Correct. those kind of scenarios yes it can be can you be more specific on the use case

Audience

yeah for example uh i got generated a lead on google ads or say a training uh on digital marketing right

Vaibhavath Shukla

yeah

Audience

so that customer is calling to a particular number

Vaibhavath Shukla

correct

Audience

this lands on say in this phone

Vaibhavath Shukla

yeah

Audience

so can i put this agent into this phone which can attend that call and answer according to my requirements

Vaibhavath Shukla

yeah it can definitely do so what it will do in the back end and then you can you know have a handshake handshake of web sockets when your number and the other number that we have uh we can basically merge together and the conversation can float from there and it can answer the questions because these are dynamic questions it’s not a fixed kind of question

Audience

right

Vaibhavath Shukla

right it can so all the knowledge that you can you’re going to give it so for example i’ll give you a use case of real estate right so uh if somebody’s making an inquiry about the real estate project you basically fill the form you get the number there the AI agent will make the call it is already trained on the entire data set of your real estate project where is it what is the per square feet size the cost of it what are the amenities and all those things locality all those things which is already trained on that it will talk to you on the basis of all that information it will record the interest level from you whether you want to visit the site or not and then it will automatically book the site visit as well and you can trigger SMS WhatsApp email whatever you require so everything that was previously done by a call center agent is completely automated using AI agents and it’s end to end process so basically the purpose that you have given it it can completely solve for that

Audience

and can like institutes can take this or companies can take this on stand alone basis or you have put in a subscription mode kind of thing

Vaibhavath Shukla

so it’s more like with charging per minute kind of subscription at this point so you set it up one time then or whatever the number of minutes that you consume with us you pay for that

Archana Jahargirdar

Okay. Any other question?

Audience

Yeah. I mean, you talked about building foundational models before the ending language, right?

Vaibhavath Shukla

Yeah.

Audience

So if you could tell me how you’re scaling on the same because foundational models are very good for demos, but when we scale, we have even seen Servam breaking.

Vaibhavath Shukla

Right.

Audience

So how are you…

Vaibhavath Shukla

That is right. We basically gave a demo with Servam and… Guys.

Audience

Well, that was too loud, but then, yeah, how are you thinking of combating that scenario?

Vaibhavath Shukla

The main thing is basically the data engine, right? So, I mean, data that you have basically trained it on, that’s the most important piece. Initially, when we tried it with Bhashini and Google data sets, all the public libraries that are available, we basically tried to fine -tune and generate, basically train the model on that data set. But unfortunately, like you mentioned, there are so many problems with that. So that’s why we built our own data engine. As you can see, we… We won an award from Prime Minister Modi as well. It was right here in the Bhatman room last year. so we basically generated data generate data from our own data data engine and that is what we are basically putting it in the model so for use case by use case for example Paytm that is working with us at scale we are making tens of thousands of all with Paytm for those kind of use cases we basically take it what exactly is the use case on right for example merchant is a very complex use case yeah right right right so we are working I mean concurrency currently is around 50 now we are going to increase that as I mean as the model grows we will increase the concurrency like right right right right right right right right right correct so there are two kinds of uh problems right so there are smaller companies which are employing five guys ten guys so if you talk about that that’s not something where we are currently focusing on and it’s not the industry can’t focus on that i mean the pricing will come down drastically in the next couple of years but there are companies like sbi insurance they are employing tens of thousands of people in particular building right so from real estate from managing the security the parking spaces uh the hr management team managements all those things subscription headsets machinery all that those things if you take it down to the last minute so that costs roughly 25 to 30 minute rupees per minute for this thing this particularly maybe cost three rupees per minute so that’s more like 90 percent of the cost saving for those kind of companies so that’s where the current market is and that’s what we are basically focusing on

Archana Jahargirdar

thank you thank you very much I request guys a round of applause for all the founders I request Pradyum Gupta to now come and present be generous with the applause at the questions both please yeah I mean founders are taking time out to talk about their product

Pradyum Gupta

thank you ma ‘am for providing me this opportunity okay hi Hi everyone, my name is Pradyum. I am representing Papri Labs here. So, just giving a simple example what Papri Labs actually do is that, for example, you are today all coming to Bharat Mandapam. Now you must be using, maybe if you are here from Delhi, you might not be using a map, but I am from outside Delhi, so I was using a map from IIT Delhi to Bharat Mandapam. Now what happened was that it said to me that these gates are open, but these gates were all of them were closed. I was just looking around all over the parking areas. And normally this is a common problem in the map system today.

What map system have done is that they have brought a great navigation system. So, you want to go at a particular place that could be anywhere all over the city, you can go down there. But that navigation will never be so much aware that the kind of awareness that you require. So, for example, that there could be a ploy. There could be a place that the gates are closed. There could be something which is happening. Maybe a very heavy fog is there. Now that is not updated. What our company does is, it is not updated. is that we update the map, any kind of existing mapping system in a very instant way. How we do it?

We work on a visual system. So for example, any kind of a vehicle which are having on the ground, they have our cameras placed in. So these are simple dashcams, CCTVs, anything which is visual, we basically place all over the cities and we work only in the metro cities as on date or only on the major highways. We take out all the data and we plot it over the map and then we not only place the videos or the images, we categorize them. So like for example, right now in Delhi, we work with a local transport here, DTC is a Delhi Transport Company. So we plotted about 8 ,000 units and then we were getting like 100 petabytes of data from all this thing.

And then we will categorize them so fast that you basically see the entire Delhi life. And the best part was that you can search from them, what’s going on there. So now, what are the use cases that we… we brought probably three use cases in the market so for example there’s a company called JC de Cox they own about 4 ,000 billboards all over New Delhi the problem in all these billboards is these billboards come at a standard price so for example you own like 10 ,000 billboards but you don’t you usually sell it only on a like a basis of if it’s a posh area then I will charge maybe more if it’s a less posh I will charge them less what we brought as a new kind of pricing mechanism that you charge on the base of impression count what digital arts brought for them and that’s how we were able to increase the revenue for about 40 -45 percent because now they were charging more on the revenue basis we work with the company called autonomous cars of this mg motors they had a hectare hectare vehicle which when they were entering in India back then they were bringing that thing in the autonomous set that one they were bringing internet edge inside one of the problems that they had was that they had luxury passengers but they wanted to know that okay what is happening on the road Like instantly, even if it has a fog, they need to know that, okay, divider is broken or not.

Am I safe in there or not? So we started to update. There’s a company called MapMyIndia. We started to update their systems very fast. Third, we worked with a company like BCG. BCG is a consulting firm which basically consults government to take decisions on the ground. What we told them that this is where the demand is there. This is where the capacity is high. That’s how we brought a root rationalization algorithm. That’s helped DTC on the ground to basically manage all their 8 ,000 buses to where they need to actually deploy more buses so that they can increase the revenue. But the second perspective was that more passengers can actually board the bus. So we update the map on the cases that they want.

Now, we have been penetrating in news. So normal daily newspaper that you usually read on a regular basis, there’s an image tree that is attached to it. In that image, there’s an about. 8 ,000 to 10 ,000 people usually just on the ground. do a basic job is to collect these images. What we do is because we have a huge volume of videos that we have, we are just updating them and they are creating a news out of it. So like you want to search anything, any news you want to create, you can create from there. How we do it? So because this is a more of a product business, so one of the problems that we faced in India when we were trying to scale this product is that even everyone is talking about AI, but today if I am just going to be asking any single passenger just to put a phone in their car and provide us data, none of you will do it.

And this was a very basic problem. We realized that people want it. In India there is this perception is that they want to absorb, they absorb the technology really fast. But to give that information is very hard. So we created a mechanism in which the customer started to supply the data by itself. So for example, when we started to deal with passenger bus service, we started to deal with passenger bus service. So we started to give them passenger bus service. So we started to give them passenger bus service. So we started to give them passenger bus service. So we started to give them passenger bus service. So we started to give them passenger bus service.

So we started to give them passenger bus service. So we started to give them passenger bus service. So we started to give them passenger bus service. counting because the problem was that they didn’t know how many people come inside the bus on a particular bus stand where we need to run more buses. When we started to deal with packages like logistic companies, logistic companies use digital locks. So today any truck which goes all over the country, the problem is that they put a digital lock and then they expect that the truck is safe. But that digital lock even gets opened, any kind of thing is not evidence in a court. What we added was a small camera in the particular container.

That thing counted how many goods were getting inside and how many goods were not, like what was the exact tally value. That’s how we deployed in our highway sector. So here if there are three sides of the data part, one is the passenger, that means we are getting city data. If we are deploying in the logistic sector, we are getting city data. We are getting highways data. And if you are deploying in the normal commercial cars, we are getting the lane information. And the perspective was. to just to get the front imagery. The back imagery they use it for themselves and we are certainly not interested in that part. That’s how we formed this entire information.

One of the problem that we faced to one of our customers was so we reached to one of so Delhi police and we started to sell this entire platform. They started to Google that like basically they want to search everything that where people are not wearing helmets because they want to cut the chalance very instantly. Now these things was that we created layers but we didn’t have a system that we can create very dynamic layers for that particular person. So what we did was we added that’s where the LLM thing came in that we started to describe every image and then internally we were searching everything for them. So like you have anything any idea in your mind you want to for example a person comes to me he says to me that find me all the CCTVs.

In New Delhi find me street lights which are not working just prompt it up. internally we are a video analytics company like we are so we keep we are running on a bare metal like hundred petabytes and then we’re just processing them really fast and you can then the best part that we brought was let’s start to compare like what changed now and what was previously before like six days back year back what was the development going on and this is how the basically the end customer gets so for example if it’s a local bus passenger company they wants to know that how many passengers actually board so we we provide a system to them but internally we use a front camera system so they use a fleet management system then we brought popular over the top so this is an example that we brought with DTC this this was funded by JICA JICA is an investment corporation which funds the government of New Delhi and that’s how we scaled in entire New Delhi second thing was that if you want to know any count like where the people whether how many cars been gone through or how many buses pass through that particular portion or where the two wheelers are or where the ambulances actually cross, we started to take out every information all over New Delhi and this is all real time.

So if you want to do it today, you want to compare it for like last six months and then you will track and target them, you can do all of it. So we brought these pay systems because JCD Cox was the organization for. So we are

Audience

Yes. Hello. So thank you so much for presenting. I am curious about knowing that you mentioned a certain petabyte of data that you are using and data you know is a very debatable topic right now after DPDP. So how are you handling that? How are you DPDP compliant? Because you are going to give this to certain other businesses also. So because you are getting a lot of personal data too, like getting images of people, getting images of the car numbers and all of these things. So how are you dbdp compliant and ensure that?

Pradyum Gupta

So there are two things. One is that inside videos we never take out for the public information. Even though the clients are ready to pay even 10 times over that value. Second thing is like this is the rule of property labs internally. Second thing is only front data is used. Front camera data faces are blurred. Number plates are also blurred. Second, third thing is that right now we don’t run on any AWS. So we don’t use hyperscalers right now. We only use bare metal servers. So bare metal is stacked in which we keep everything in Europe right now. So Europe, Hetzner, we have taken a portion of their data centers. And the second thing, so in India there’s a big problem.

Audience

how are you handling that? How are you DPDP compliant? Because you are going to give this to certain other businesses also. So because you are getting a lot of personal data too, like getting images of people, getting images of the car numbers and all of these things. So how are you DPDP compliant and ensure that?

Pradyum Gupta

So there are two things. One is that inside videos, we never take out for the public information, even though the clients are ready to pay even 10 times over that value. Second thing is, like this is the rule of popular. It loves internally. Second thing is only front data is used. Front camera data faces are blurred. Number plates are also blurred. Second, third thing is that right now, we don’t run on any AWS. So we don’t use hyperscalers right now. We only use bare metal servers. So bare metal means stacks in which we keep everything in Europe right now. So Europe, Hetzner, we have taken a portion of their data centers. And the second thing, so in India, there’s a big problem.

problem. One of the things that people say that, okay, GPUs are a lot, but the reality is that GPUs are, the companies which are actually selling these GPUs never purchase from them, rather purchase from CDAC. So CDAC is an organization called Aravat. Aravat is providing us supercomputers and like a bare, dirty price. So if you’re going to be searching on Aravat, just purchase their GPUs and then keep data on bare metal on your security premises, then it’s very safe. And it’s very cheap.

Archana Jahargirdar

Okay, one more question. And in the end, we’ll take more questions because once everyone’s done their presentations, please go for it.

Audience

I want to ask, because I’m a performative on the product, and I just want to ask, like, what are the incentives you are giving to the dashcam holders? Like, I heard you are giving incentives to the local DTDC buses or like…

Pradyum Gupta

So we don’t pay incentives, they pay us.

Audience

So like, what is the leverage you are holding for them to…

Pradyum Gupta

For example… So this company… DTC, Dairy Transport Corporation, they burn about 80 crores every year on not providing the timely bus service. And they had a very low revenue. Like they had a revenue loss of about 800 crores as I had a talk with A .S. Sachin Shinde back then he was there. Now A .S. Jitendraji came in. Now when we came into the system, we actually reduced on the revenue loss for them. So for example, if you see this number, 27 is the demand which is there and 25 is the capacity. So in India what happened was that when Amadvi Party came in, they provided female passengers as a free bus service. Now every party started to criticize all these parties that you are providing free for the bus service to the females.

We were the first company which actually gave them a mandate that 1 % is the actual female passengers are operating. So that’s how they were able to save their lives. So that’s when, you know, when we came in, we saw there are a lot of… operational issues.

Audience

We can access it through our apps or like…

Pradyum Gupta

Nothing is possible. We are a pure B2B company. We never intend to be B2C.

Archana Jahargirdar

Okay. We’ll do questions at the end. Let’s finish the presentations and… Okay, quickly. But short answer on your part. Yeah, but short answer, please.

Pradyum Gupta

So, we sell on tile basis. So, for example, this particular area, this comes at 25 by 25 square kilometers. This starts at 1 .5 lakh rupees. Per tile. This is for only valid for one day. And this usually multiplies at a volume at the company comes in. Thank you so much.

Archana Jahargirdar

So, now I’ll request Meenal to come and present, please.

Meenal Gupta

Hello everyone, I am Meenal Gupta from EasyOPI Solutions and so nice to see you over here. Who all are founders over here? Oh wow, so many. So we… Founders are here. So founders should be here. Who all are founders over here? So I love to be with founders. I know they share the journey and the struggle, they know it very well. So we all three women, mostly known as GreenDeviya because this name was given to us by Mr. Narendra. Okay. So I am Narendra Modi. So Meenal Gupta, I am the founder. I am the founder. Noor for… Noor for… and Sheetal Tarkas. We all started this journey. Our platform, we have named it as Imagix AI.

It’s an AI driven precision imaging to treatment planning for cancer. So we are HIPAA compliant. We have four patents in hand. We are ISO 13485 certified company. We also have SEDESCO license. Talking about SEDESCO, people who are from medical field, they might be knowing that there is a license which is required when we want to take our solution to hospitals. So there is a ICMR agency that certifies your product that is software as a medical device. Once it is certified, then you can take it to any hospital. You can actually commercialize post that. So we are a company. We are SEDESCO certified. So talking about the problem, we have a lot of people who are from medical field.

We know there are around 20 million new cancer cases every year. And there is not like that doctors don’t have the intent of solving the problem or treating cancer. But the main problem is the shortage of clinical experts. We can increase the devices like diagnosis devices can be increased, imaging can be increased. But the problem that is facing is the shortage of expertise for oncology. So and finally the treatment planning. So talking about the problem, once a cancer is being detected, the patient is sent for CT scan. Once the CT scan MRI. Once it is being done, then. Tumor board decides whether the patient has to go for radiation therapy or they have to go for surgery.

or the combination of both. I can understand everyone can relate it because almost every family in India or world are having someone very near and dear who are facing through cancer and they have gone through just such challenges. So what happens is because of this shortage, it cause life. It cause life or the stage of the cancer changes. Either it changes from first level to second stage. It changes because of this unavailability. So this is where our solution comes in. So this was our own personal experience where all three founders have personally experienced cancer to our near ones and we have gone through this radiation therapy where we had to wait in queue because of unavailability of specialist, because of unavailability of treatment planning.

So this was a very big bottleneck. You can see over here. So this was a very big bottleneck. You can see over here. So this was a very big bottleneck. So this was a very big bottleneck. So this was a very big bottleneck. So this was a very big bottleneck. So this was a very big bottleneck. So this was a very big bottleneck. Once a patient is recommended to go for radiation therapy, there is a planning to do for that radiation therapy. For this planning, there is a manual process where wherever there is a tumor, all the surrounding organs of tumor, they are to be segmented. And this is a manual process. I can proudly say that in India, there is no one who is solving this problem and we are the only one who have this solution.

Here we contour all the, contouring is the masking. We mask all the organs which are on risk and are surrounding tumor. So here the main purpose is that all the organs that are surrounding tumor, they should be saved because they are healthy organs. And radiation therapy should be as less as possible on those healthy organs. the manual process it used to take somewhere around 960 to 90 minutes we have reduced it to at the max of 15 minutes the reason is uh for complex uh radiation therapy when it is head and neck cancer it takes lots of time so maximum 15 minutes and minimum 5 minutes so we have reduced it here uh what we do is once the patient is diagnosed with cancer he goes for city scan tumor board this city scan is being done it is uploaded on our cloud tumor board have the access of this city scan through our own uh dicom viewer uh dicom is the format uh through which this images can be seen they decide whether they have to go for city scan radiation therapy planning or surgery and we have various suits we do ai analysis where do we do first level of analysis where we mention the load of the tumor and second level of analysis is we have various suits we have XraySuite, NeuroSuite and OncoSuite which works on this scans and finally we give the final report.

So this is our product. We have trained our AI on about 5 million of data set in which around 30 % is Indian data set and this 30 % we gathered it from northeast region. So northeast region is very tough terian and we got support from Niti Aayog where we went and collected data. It is very tough terian. Means taking off AI over there is very challenging because 4G has not reached there yet. So we had to implement it on -premise solution so that we can get the data and we can help them solve that. So 30 % of data we have almost deployed it in 14 states in India. We got our data 30 % from that. Accuracy is around 92%. So it is around 92 % to 99 % depending upon the data.

complexity you can see this data we are working in Gujarat in seven district where we are helping to scan to do CXR chest and lung analysis where we have helped we have helped we have made somewhere around 1 million of scans we have detected around 4 ,000 TB positive cases till yet in which there were around six lung cancer cases where early intervention is still possible we have done around thousand of radiation radiotherapy plans till yet and talking about in last three months we had done around five fifty thousand of chest x -rays where twenty seven hundred of TB were flagged so we could save TB these are live photos where handle x -rays and all are being done so I this was our solution was recognized first by Mr.

Naren Indra Modi and day before yesterday we were invited in Microsoft by Bill Gates to show our solution to him.

Audience

In health tech, I’ve observed that trust is a very big factor in terms of AI adoption and you seem to be implementing it across India. So how do you make sure that the technology and the science behind it is trusted by the people who are being benefited by it?

Meenal Gupta

Yeah, I understand. So here, our solution, we are not replacing doctors. We are just assisting doctors. We have made their manual process easy process. But final approval has to be done by radiologists. So it is human in the loop. We are not claiming that directly our AI will solve it.

Archana Jahargirdar

Thank you. I’ll request Vivek now to come

Vivek Gupta

Hi everyone. So first of all, thank you so much team Rukam Capital for organizing such a vibrant event and the energy is full of high in this room, I can see. It can be higher though. Yeah. I think Yes. So my name is Vivek Gupta. I’m the founder and CEO of company called Indus Labs AI. So we are building the voice architecture of India. So we are basically building the whole layer of . operating system of voice, where all the layers like speech to text, text to speech, the LLM, speech to speech, all of this infrastructure we are building, right? So it is a common platform where anyone, I said anyone can come on this platform and build their own voice agent.

As sir was asking the question about whether you are running a campaign on Google, you have put a number, you can build your own agent by yourself, right? So it’s a DIY platform where we are training, we are primarily focusing on Indian languages because the problem, linguistic problem in our country is if you see after each 20 kilometer dialect changes, right? So we are working with couple of banks, NBFCs, right? And whenever they run a campaign, they run a basically cold calling in let’s say Mojaffarnagar region, in UP, right? And whenever they call in Gorakhpur region, the Hindi is totally different. But global players like 11 Labs or maybe, you know, the Indian language, they are all different.

You know, some different global players, players like Azure and Google, they are providing a generic Hindi, but we need. a company in India who can build the infrastructure of voice in our country based on our directs. And again, so while we are building the infrastructure on our own GPUs and servers, hyperscalers we have inbuilt in our system, so that’s how we are able to reduce the latency. So we have some sub 400, 500 millisecond around latency into the system, which is like more human conversation, you can feel it. And you know, the complete analysis of the call is there, right? As soon as the call gets disconnected, you will find the sentiment analysis of the call, the outcome of the call, and it will log into the system.

So what is the expected outcome of the call? It will go into your CRM. So the journey starts from your CRM and ends with CRM. You trigger the calls from CRM and it ends with the CRM. So again, as I said, like native dialect mastery we have, and we are ultra low latency. And again, you know, call. somebody was also asking question regarding how effective it is in terms of cost. So if I talk about existing system cost we are reducing the cost up to 70 % right. So up to 70 % cost can be reduced and operationally you are enabled like 24 7 availability of the system is there. System is multilingual right. You don’t need to have multiple people for different languages.

Single system can handle 24 7 and that’s how you are able to reduce cost and operationally efficient. And you know the important part is emotional handling right. So like one year back I started this company 2 .5 years back right. I used to be director of an engineering company in software company in Bangalore and 2 .5 years back I quit and started Indus Labs and my background is from IT Delhi right. So the core problem was emotions right when I started this company. I always thought like if somebody is laughing over the call how would AI system would recognize the person is happy or angry. That’s how the agent would say sorry or congratulate you. Right because you need to understand the emotions right.

So we were working on this. is speech -to -text model since last 1 .5 years and on the 16th of this month in the department only we launched this model called it’s basically no emotion of your STD so we launched this model here in part of the monthly and we are basically you know distributing it to our customers existing customers now so that they can start using it’s a PUC phase right now and the good part is since we are an Indian company the whole the data is going to reside here on our sovereign feel is there right so we are pure Indian origin company so as I said like if I if I compare with now global pairs like Google and 11 labs so we are cost so like let’s say I hope many people knows what 11 labs is right so their cost is somewhere around eight rupees per minute right but we are seventy percent lower we sell at two rupees per minute right and we are superior in terms of Indian dialect accuracy and we are superior and streaming latency is somewhere around three hundred to four hundred millisecond and emotional expressiveness is already there in our system as we recently launched it and Indian data residency clause is obviously there because we are an Indian company.

So, I mean, so we are a huge case agnostic platform. We don’t say like we are having mastery in this huge case. You can come on our platform. So, like as of today, we are working with multiple use cases, right? We are working with banks. We are working with enterprises in FMCG. We are working with, you know, customer support people. They are building their own voice agents, but they use our STT and TTS through API, right? So, not anyone can build STT and TTS. So, instead of using 11 labs, they use us because we are cost effective and obviously good in terms of Indian dialect mastery. So, it’s DIY platform. Anyone can come and build their own agent.

We have different flows. Workflow is already there. You can create nodes. Each node can be connected with webhooks or API and that can be used to build your own voice agent. So, we are working with customers. We are working with customers. We are working with customers. We are working with customers. We are working with customers. We are working with customers. We are working with customers. So it’s completely guided journey and you can create and you can also integrate your voice agent with telephony. So we have already partnership with Airtel and Geo. So that’s how you can give the SIP channels through there and connect with your voice agent. So it’s complete end to end journey.

And again, so the core market is B2B enterprises. Right. And we are also platform for developers. So developers can use our APIs into their existing systems wherever they want to use. Right. So you just based API and it is again per second based costing is there. So as how many seconds you will use, you will be the credits would be detected. So it’s a recharge based system. You recharge and you can use it. And also we are into we are building channel partners as well. So like we have a couple of partners. One is in Dubai. One is in Germany. So we are white white labeling our platform for them. Right. So they can onboard their clients on their platform internally.

They would become our client shared. and we can share the revenue so we have right now we have developed four to five partners globally so we are building from Bharat for the globe so we have foreign languages as well we have Arabic we have German we have French and Mandarin Mandarin is in building stage right now so we have core and English all the accents we have or still in accent British accent American accent all the male female voices and again you can clone your voice as

Archana Jahargirdar

thank you any questions quick questions any questions

Vivek Gupta

yep absolutely it’s a no -code platform it’s a journey based platform you need to trigger what you want to do what you want to basically build out of it so let’s say you are you want to build an agent for inbound agent for your leads right so anybody who is calling from Google Ads would land on this voice agent. So you will define the journey and how you want to integrate. Let’s say you want to, some meeting has been fixed for your product, right? So it is connected with Google Calendar, your Google Calendar. So as soon as AI agent books a meeting, you will get an email on your Google Calendar. This meeting has been fixed and your Google Calendar will be blocked.

Audience

So my question is like, I have to make this like, there are some nodes, I have to connect these and make a flow or you have all things are made up, we just have to click and the agent will start walking.

Vivek Gupta

Yeah, it’s like DIY platform and we have tutorials as well, right? If you are stuck, you can see the tutorials. So and still you are feeling you are not able to build it, you can support the customer center. So our team will help you in that case.

Archana Jahargirdar

Okay, quick question.

Audience

Yeah. How did you start? When you start, left your job?

Archana Jahargirdar

We don’t have so much time. You can talk to them offline, but the data is a question.

Vivek Gupta

I’ll make it short. So, journey started 2 .5 years back. So, I initially started building voice agent and started using TTS of somebody else. Then we figured out that this TTS is having this issue. How can I solve this? Because my customer will ask the issues complaint to me. So, we were able to solve this problem of pronunciation of some words because these issues were there already in the third -party system. So, we thought of initially building our own infrastructure. And we pivoted in a model that people will use our APIs. That’s what we want to build it, right? So, then, firstly, we used public data, publicly available data. Then we started creating our own data, right?

So, we create the data and we have multiple hyperscalers available. And scalability -wise, our system is so much scalable that you can put a thousand requests at a time, it will handle. So, it will scale 0 to 1000 within 10 minutes. So, that’s how we built it. Thank you very much.

Archana Jahargirdar

thank you so much for all the engaging questions that everybody did make the effort to ask we are time constrained over here so I want to thank all the founders for sharing your product and any questions you have for the founders please do connect with them and do continue the conversation it’s just that we need to leave the room and I request all of us to do a quick picture together thank you Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (16)
Factual NotesClaims verified against the Diplo knowledge base (2)
Confirmedhigh

“Moderator Archana Jahargirdar emphasized a product‑only format, asking founders to discuss only technical aspects and avoid business model, funding, or revenue details.”

The knowledge base states that the session was moderated by Archana Jahargirdar, who emphasized that presentations should focus purely on product details, confirming the product‑only directive.

Additional Contextmedium

“Ravindra Kumar described Technodate AI’s ambition to “automate automation” by making industrial robotics and automation accessible as a DIY task.”

The knowledge base notes that before starting Technodate, Kumar worked with the world’s largest manufacturer of industrial robots, providing background on his experience in industrial automation that underpins the “automate automation” claim.

External Sources (86)
S1
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos Hi, I’m Meenal Gupta, founder o…
S2
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — Accuracy is around 92%. So it is around 92 % to 99 % depending upon the data. complexity you can see this data we are wo…
S3
Founders Adda Raw Conversations with India’s Top AI Pioneers — Hello everyone, I am Meenal Gupta from EasyOPI Solutions and so nice to see you over here. Who all are founders over her…
S4
Founders Adda Raw Conversations with India’s Top AI Pioneers — -Archana Jahargirdar- Conference moderator/host from Rukam Capital, facilitating the founder presentations and Q&A sessi…
S5
https://dig.watch/event/india-ai-impact-summit-2026/leveraging-ai4all_-pathways-to-inclusion — That’s great. I think moving into the curriculum always helps that you’re planting the seeds early on for the training. …
S6
S7
Founders Adda Raw Conversations with India’s Top AI Pioneers — Pradyum Gupta from Papri Labs showcased a real-time mapping system that updates existing maps using visual data from das…
S8
Comprehensive Report: Preventing Jobless Growth in the Age of AI — He’s a commissioner at the European Commission. His focus is on the economy and productivity. Ravi Kumar is the CEO of C…
S10
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — And again, so the core market is B2B enterprises. Right. And we are also platform for developers. So developers can use …
S11
Founders Adda Raw Conversations with India’s Top AI Pioneers — Hi everyone. So first of all, thank you so much team Rukam Capital for organizing such a vibrant event and the energy is…
S12
Driving Indias AI Future Growth Innovation and Impact — And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping the journey, both from p…
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
Internet standards and human rights | IGF 2023 WS #460 — Engaging with technical standard bodies demands technical expertise, but acknowledging the need for inclusivity without …
S17
AI Transformation in Practice_ Insights from India’s Consulting Leaders — The tone was pragmatically optimistic and refreshingly candid. Both speakers were honest about challenges and uncertaint…
S18
Agentic AI and the new industrial diplomacy — Chileoffers a very different entry point into the same technology.Codelco, the state-owned copper giant and one of the w…
S19
AI for Good Technology That Empowers People — The professor emphasised that whilst foundation models attempting to solve universal problems receive significant attent…
S20
Conversation: 02 — Companies like us and others who are starting to make, we have been doing that for a few years, where they’ve been makin…
S21
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy Lau-Schindewolf highlighted OpenAI’s Stargate infrastructure project as an example of private sector leadership mobi…
S22
Box 4.1: Adapting ITU’s price data collection to ICT developments — 1. The prices of the operator with the largest market share (measured by the number of fixedtelephone subscriptions) are…
S23
Day 0 Event #171 Legalization of data governance — He Bo: Thank you. Good afternoon, everyone. I’m He Bo from China Academy. Academy of Information and Communication T…
S24
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Patricia Egger Merci. Oui, donc, c’est particulier d’être ici parce qu’en effet, j’ai la casquette, je représente Proton…
S25
Israel’s Policy on Artificial Intelligence Regulation and Ethics — Privacy– The development and use of AI systems necessitates the use of large quantities of data, some of which could inc…
S26
Closing remarks — Minimal to no disagreement present. This transcript represents a closing ceremony where speakers (Doreen Bogdan Martin, …
S27
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — This transcript contains a single keynote speech by Deputy Prime Minister Ebba Bush with only brief introductory comment…
S28
Laying the foundations for AI governance — Low to moderate disagreement level. The speakers largely agreed on problem identification but differed on solutions and …
S29
Setting the Rules_ Global AI Standards for Growth and Governance — Yes. I’ll take it back to what Chris was talking about in terms of collective action problems. So some of the mitigation…
S30
Table of Contents — Advanced manufacturing addresses the transformation of the manufacturing and automation industry to a new level of intel…
S31
INCREASING ACCESS TO DATA ACROSS THE ECONOMY — Industrial policy objectives attempt to improve the business environment for specific sectors or technologies t…
S32
Global Data Partnership Against Forced Labour: A Comprehensive Discussion Summary — Integration with existing regulatory frameworks and compliance systems across different jurisdictions presents complex t…
S33
E-Commerce Legal and Regulatory Framework for Data Governance in Developing Countries ( Nigeria Customs Service) — In conclusion, startups face challenges when it comes to sharing data with regulatory agencies, particularly in terms of…
S34
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S35
OVERVIEW — – -The propensity to act fast including when ‘testing in the wild’ and deploying innovations at scale in ways that can u…
S36
Networking Session #37 Mapping the DPI stakeholders? — ## Audience Contributions A significant challenge he identified was the lack of visibility into deployment impact. Most…
S37
Main Session | Policy Network on Meaningful Access — Oscar G Leon Suarez: Hello. This is Oscar León, Executive Secretary of the Inter-American Telecommunication Commissio…
S38
Founders Adda Raw Conversations with India’s Top AI Pioneers — -Real-time Map Intelligence and Urban Analytics: Papri Labs demonstrated their visual data processing system that update…
S39
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Malik Payal: Thank you, Priya. And yes, as you mentioned about that T20 policy brief, which we did, it was really great …
S40
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — The business model for AI in farming can be particularly challenging, especially for smallholder farmers in emerging eco…
S41
GEO-politics/economics/emotions in the AI era — Paradoxically, as technology developed, it became increasingly tied to geography. Once connected, users’ physical locati…
S42
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — The speaker calls for a fundamental redesign of monetisation, moving away from advertising‑only and subscription models …
S43
NRIs MAIN SESSION: DATA GOVERNANCE — Additionally, there is an advocacy for appropriate data protection legislation and policies. Data is subject to the laws…
S44
The Challenges of Data Governance in a Multilateral World — An advocate in the discussion strongly supports data governance models that prioritize cooperation, privacy, and the com…
S45
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — The speakers also emphasized the importance of extending beyond first-generation rights when it comes to data governance…
S46
Driving Social Good with AI_ Evaluation and Open Source at Scale — The conversation then shifted to the growing problem of AI-generated code submissions to open source projects. Sanket Ve…
S47
Tessl secures $125M for AI-powered code platform — London-based startup Tesslhas raised$125 million in funding, achieving a valuation exceeding $500 million. Led by founde…
S48
From principles to practice: Governing advanced AI in action — Ya Qin Zhang: I thought the National AI Safety Institute and a lot of the NGOs have played a very constructive and posit…
S49
AI That Empowers Safety Growth and Social Inclusion in Action — Well, I mean, I think in general we have sort of corporations are incentivized to put products on market that are safe a…
S50
Summit Opening Session — The summit’s emphasis on practical guidance, from streamlining permitting processes to strengthening repair readiness, d…
S51
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — Tunisia: Mr. Chairman, an objective review of the current shape of our organization stresses the need of a deep reform…
S52
Background — – review and assess progress at the international and regional levels in the implementation of action lines, recommendat…
S53
Ad Hoc Consultation: Friday 9th February, Morning session — Additional Observations: – The focused nature of the statement, omitting counterarguments or challenges from other membe…
S54
Survival Tech Harnessing AI to Manage Global Climate Extremes — Professor Amit Sheth opened the discussion by explaining the origins of IRO, which emerged from a December 2023 meeting …
S55
The Foundation of AI Democratizing Compute Data Infrastructure — Given the volume of funds available, I would focus a lot more on capability development of people to be able, their abil…
S56
Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression — ; Association for Progressive 28. The scale and complexity of addressing hateful expression presents long-term …
S57
PREAMBLE — – -The Signatories of this Code recognise the importance of diluting the visibility of Disinformation by i…
S58
OVERVIEW — 1. Technology company business models, and the commercial underpinnings of 21st century technological advances…
S59
Founders Adda Raw Conversations with India’s Top AI Pioneers — This was a founder showcase event organized by Rukam Capital where AI startup founders presented their products to an au…
S60
Closing remarks — Minimal to no disagreement present. This transcript represents a closing ceremony where speakers (Doreen Bogdan Martin, …
S61
Day 0 Event #178 Ethical Procurement in the Digital Age — As this is a single-speaker presentation, there is no consensus to assess among multiple speakers. However, the speaker …
S62
Strategic Action Plan for Artificial Intelligence — Many large Dutch companies are already working on deepening their knowledge of AI and using it to improve their services…
S63
Strategy — ‘Foster the use of AI in vital developmental sectors using partnerships with local beneficiaries and local or foreign te…
S64
Multistakeholder Partnerships for Thriving AI Ecosystems — LLMs solve only part of the problem; industry-specific, company-specific, and context-specific solutions still require s…
S65
MASTERPLAN FLAGSHIP PROGRAMMES — To create this plan, the government will convene an interagency AI task force comprised of National Government agencies,…
S66
MASTERPLAN FLAGSHIP PROGRAMMES — To create this plan, the government will convene an interagency AI task force comprised of National Government agencies,…
S67
DC-Blockchain Implementation of the DAO Model Law:Challenges &amp; Way Forward | IGF 2023 — The frustration faced in movement and change across various legal systems is acknowledged. Overall, the analysis provide…
S68
Exploring Emerging PE³Ts for Data Governance with Trust | IGF 2023 Open Forum #161 — Automation is widely regarded as a crucial component in privacy management. It allows for scaling efforts and addressing…
S69
E-Commerce Legal and Regulatory Framework for Data Governance in Developing Countries ( Nigeria Customs Service) — In conclusion, startups face challenges when it comes to sharing data with regulatory agencies, particularly in terms of…
S70
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Abhishek Singh: One part is that, of course, the way the technology is evolving, there is IP-driven solutions and there …
S71
WS #225 Bridging the Connectivity Gap for Excluded Communities — Christopher Locke presented community networks as viable alternatives to traditional telecommunications models, emphasiz…
S72
OVERVIEW — – -The propensity to act fast including when ‘testing in the wild’ and deploying innovations at scale in ways that can u…
S73
About the Authors — Modularity also has several important implications from a supply-side perspective. First, the same task can be accomplis…
S74
AI Infrastructure and Future Development: A Panel Discussion — And of course, Sora, because now we have multimodal. So the product platform is multidimensional. And then finally, the …
S75
Invest India Fireside Chat — -Moderator: Event moderator introducing the session participants
S76
AI for social good: the new face of technosolutionism — Birhane concluded her presentation by acknowledging that being allowed to “take centre stage here and to speak about thi…
S77
Al and Global Challenges: Ethical Development and Responsible Deployment — Dr. Shukla further discussed the importance of transparency in AI applications, which would enable better understanding …
S78
Democratizing AI: Open foundations and shared resources for global impact — Repeatedly invited audience participation, encouraged reaching out to the presenters, and emphasized the openness of the…
S79
Comprehensive Summary: The Future of Robotics and Physical AI — And so there are plenty of challenges, of technical challenges. And yet, if we look at what the machines can do today, w…
S80
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-vivek-mahajan-cto-fujitsu-india-ai-impact-summit — But then this technology, the compute networks, as well as the AI platform stack, comes together in edge devices. Robots…
S81
AUDA-NEPAD White Paper: Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063 — Strengthening the digital component of education entails a good foundation for scientific education at the tertiary leve…
S82
From Innovation to Impact_ Bringing AI to the Public — If we don’t make for it, our all compounded historical knowledge will be lacking in the next generation. So instead of a…
S83
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Data residency requirements and lack of cutting-edge model infrastructure in India create deployment barriers
S84
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have hear…
S85
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Thank you. Thank you, Joel. Thank you, everybody, for being here this morning. Let me first start by putting the AI. Tha…
S86
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Brandon Mello from GenSpark identified adoption challenges, noting that 95% of AI pilots fail to reach production due to…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Archana Jahargirdar
2 arguments69 words per minute569 words488 seconds
Argument 1
Emphasis on product‑only pitches, no business or funding talk
EXPLANATION
Archana instructed the founders to keep their presentations strictly about the product, avoiding any discussion of business models, funding, or revenue. This rule was set to ensure the summit focuses on technical product insights rather than commercial pitches.
EVIDENCE
She outlined the format, stating that each founder should talk only about their product and that there should be no business, pitching, or money discussion during the presentations [5-8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session moderation notes state that Archana instructed founders to keep presentations strictly about the product and avoid any business or funding discussion [S3].
MAJOR DISCUSSION POINT
Product‑only presentation rule
DISAGREED WITH
Ravindra Kumar, Pradyum Gupta, Vaibhavath Shukla
Argument 2
Guidance to presenters to balance technical jargon with accessibility
EXPLANATION
Archana asked presenters to use appropriate technical language but also to simplify explanations for audience members who may not be AI experts. She emphasized that both jargon‑heavy and simplified talks are acceptable as long as the audience can follow.
EVIDENCE
She requested presenters to use jargon if the audience can understand it, while also being mindful of non-AI natives and offered the option to simplify the language [2]; she explicitly said simplifying is fine [3] and not simplifying is also okay [4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Archana asked presenters to use jargon only when the audience can follow, while also allowing simplification for non-AI natives, as highlighted in the discussion summary [S17] and the session guidelines [S3].
MAJOR DISCUSSION POINT
Balancing jargon and accessibility
R
Ravindra Kumar
5 arguments161 words per minute1033 words382 seconds
Argument 1
Goal to “automate automation” using agentic AI; three modules: conceptualize, deploy, troubleshoot
EXPLANATION
Ravindra presented Technodate AI’s vision of making automation as easy as DIY by leveraging agentic AI. The solution is structured into three modules that help users design, implement, and maintain automation systems.
EVIDENCE
He explained that Technodate aims to make automation DIY using agentic AI and described three modules: conceptualizing robotics solutions, deploying and commissioning them, and troubleshooting when issues arise [28-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ravindra described Technodate’s agentic AI platform with three modules (conceptualize, deploy, troubleshoot) in the founders’ conversation [S3], and the broader context of agentic AI in industry is discussed in an external analysis of industrial diplomacy [S18].
MAJOR DISCUSSION POINT
Agentic AI for automation
Argument 2
Need for a foundational model despite funding challenges; iterative customer‑driven approach
EXPLANATION
Ravindra noted that building a foundational AI model is essential for their product, but raising funds for such a model in India is difficult. Consequently, they adopted an iterative approach, engaging customers early and experimenting before deciding on the need for a foundational model.
EVIDENCE
He described the initial thought of building a foundational model, the difficulty of raising money in India, and the decision to talk to customers and experiment, eventually realizing a foundational model was still required [33-40] and highlighted funding challenges [34-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He noted difficulty raising funds for a foundational model in India and adopted an iterative, customer-driven approach, as recorded in the raw conversation transcript [S3].
MAJOR DISCUSSION POINT
Foundational model funding dilemma
DISAGREED WITH
Vaibhavath Shukla
Argument 3
Foundational models are optional; focus should be on solving specific customer problems at the application layer
EXPLANATION
Ravindra argued that building a foundational model is not always necessary; the priority should be delivering solutions that address concrete customer needs through application‑level development.
EVIDENCE
He stated that he is not fond of building foundational models and that the aim is to solve the customer’s problem, emphasizing the importance of the application layer over the model itself [55-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He argued that building foundational models is not always necessary and emphasized application-layer solutions, aligning with perspectives on context-specific AI versus universal foundation models [S19].
MAJOR DISCUSSION POINT
Application‑layer focus over foundational models
Argument 4
Strategic partnerships with the Indian Air Force and Fortune 500 companies validate the platform’s impact
EXPLANATION
Ravindra highlighted collaborations with high‑profile customers, including a forthcoming deployment with the Indian Air Force and existing applications for Fortune 500 firms, demonstrating market traction and potential societal impact.
EVIDENCE
He mentioned that they are exploring a use case with the Indian Air Force and have already deployed applications with Fortune 500 companies, indicating strong validation of their technology [47] and [41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He mentioned an upcoming deployment with the Indian Air Force and existing Fortune 500 customers, confirming market validation in the discussion notes [S3].
MAJOR DISCUSSION POINT
Strategic partnerships and market validation
Argument 5
Agentic AI can automatically generate CNC programming for manufacturing and defense use cases
EXPLANATION
Ravindra explained that their platform extends automation beyond robotics by using generative AI to produce CNC programs required for aerospace, automotive, and defense components, thereby streamlining complex manufacturing workflows.
EVIDENCE
He described CNC programming as essential for aerospace and automotive parts and stated that such programs can be generated using agentic or generative AI, providing an example of an aero-engine error diagnosis powered by AI [53-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The platform’s capability to generate CNC programs for aerospace and defense using generative AI is described in the founders’ conversation [S3].
MAJOR DISCUSSION POINT
AI‑driven CNC programming for industrial automation
V
Vaibhavath Shukla
7 arguments163 words per minute1130 words414 seconds
Argument 1
Building a complete voice infrastructure that removes humans from the loop; partnerships with OpenAI and large enterprises
EXPLANATION
Vaibhavath described Quonsys AI’s end‑to‑end voice platform that automates call‑center operations without human intervention. The company collaborates with OpenAI and serves major enterprises such as Paytm, CRED, and PropBotX.
EVIDENCE
He introduced Quonsys AI as a voice infrastructure that eliminates the need for humans in the loop, mentioning partnerships with OpenAI and work with top enterprises like Paytm, CRED, and PropBotX [68-76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vaibhavath presented Quonsys AI as an end-to-end voice infrastructure with OpenAI partnership and enterprise customers such as Paytm and CRED, as detailed in the raw conversation summary [S3].
MAJOR DISCUSSION POINT
Voice AI for call‑center automation
Argument 2
Pricing model based on per‑minute usage; scalability plan with custom data engine and concurrency growth
EXPLANATION
The pricing strategy charges customers per minute of AI usage, with a subscription‑like model. Vaibhavath also highlighted plans to increase concurrency and scale the system as demand grows.
EVIDENCE
He explained a per-minute charging model and described scaling concurrency from 50 upwards, noting cost reductions of up to 70 % for large customers [99-107] and detailed pricing per minute [119-122].
MAJOR DISCUSSION POINT
Per‑minute pricing and scalability
DISAGREED WITH
Pradyum Gupta, Vivek Gupta
Argument 3
Creation of a proprietary data engine to generate training data; avoiding public datasets that cause reliability issues
EXPLANATION
After encountering problems with public datasets, Vaibhavath’s team built their own data engine to generate high‑quality training data for their voice models, ensuring better performance and reliability.
EVIDENCE
He described building a proprietary data engine after public datasets proved problematic, generating data internally and even receiving an award from the Prime Minister for this effort [109-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasized building a proprietary data engine to avoid unreliable public datasets, reflecting the need for domain-specific data highlighted in analyses of AI strategy and model development [S19].
MAJOR DISCUSSION POINT
Proprietary data engine development
Argument 4
Demonstrated cost savings of 70‑90 % for large BPO/enterprise customers
EXPLANATION
Vaibhavath claimed that Quonsys AI delivers substantial cost reductions for large enterprises, citing savings ranging from 70 % to 90 % compared with traditional call‑center operations.
EVIDENCE
He mentioned that the solution can save 70-90 % for big customers, providing an example of reduced per-minute cost and overall cost efficiency for enterprises like Paytm [119-122].
MAJOR DISCUSSION POINT
Significant cost savings
Argument 5
Building a proprietary data engine is essential for domain‑specific performance; foundational model development is secondary
EXPLANATION
Vaibhavath reiterated that a custom data engine is crucial for achieving high performance in their specific domain, reducing reliance on generic foundational models.
EVIDENCE
He emphasized that the proprietary data engine is central to their approach because public datasets caused reliability issues, and that this engine underpins their domain-specific performance [109-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The priority given to a custom data engine over generic foundational models mirrors arguments about context-specific AI solutions versus universal models [S19].
MAJOR DISCUSSION POINT
Domain‑specific data engine priority
DISAGREED WITH
Ravindra Kumar
Argument 6
Inquiry about routing calls to AI agents and dynamic use‑case handling
EXPLANATION
An audience member asked whether an AI agent could be placed on a phone to answer incoming calls according to specific requirements. Vaibhavath confirmed this capability and explained the technical handshake involved.
EVIDENCE
The audience asked if an agent could be embedded in a phone to answer calls, and Vaibhavath responded that it can be done via a backend handshake using web sockets, allowing dynamic question handling [94-95].
MAJOR DISCUSSION POINT
Dynamic call routing to AI agents
Argument 7
Question on scaling challenges of foundational models and concurrency limits
EXPLANATION
The audience raised concerns about scaling foundational models and handling high concurrency. Vaibhavath addressed these concerns by describing their data engine, concurrency handling, and plans to increase capacity.
EVIDENCE
The audience queried scaling and concurrency issues, and Vaibhavath answered by discussing the proprietary data engine, current concurrency of 50, and plans to scale up to handle thousands of requests, emphasizing cost-effective scaling [103-105] and his detailed response on scaling strategy [109-115].
MAJOR DISCUSSION POINT
Scaling and concurrency strategy
P
Pradyum Gupta
5 arguments190 words per minute2346 words739 seconds
Argument 1
Collecting massive visual data via dash‑cams/CCTVs to update maps instantly; use cases in billboard pricing, autonomous vehicle safety, bus fleet optimization
EXPLANATION
Pradyum explained that Papri Labs gathers visual data from dash‑cams and CCTVs across metro cities, processes petabytes of footage, and updates maps in real time. This data supports applications such as dynamic billboard pricing, safety for autonomous vehicles, and optimizing bus fleet deployment.
EVIDENCE
He described deploying cameras on vehicles to collect visual data, handling around 100 petabytes, and using it for use cases like billboard pricing based on impressions, autonomous vehicle safety, and bus fleet capacity optimization [121-144].
MAJOR DISCUSSION POINT
Real‑time visual mapping platform
Argument 2
Business model based on selling “tiles” of mapped area on a per‑day basis
EXPLANATION
Papri Labs monetizes its mapping service by selling geographic tiles (25 km × 25 km) to customers, charging a fixed fee per tile per day.
EVIDENCE
He stated that the company sells mapped areas in tiles of 25 km by 25 km, priced at 1.5 lakh rupees per tile per day [267-271].
MAJOR DISCUSSION POINT
Tile‑based pricing model
DISAGREED WITH
Vaibhavath Shukla, Vivek Gupta
Argument 3
Blurring faces and number plates; keeping raw video internal; using bare‑metal European servers, no hyperscalers
EXPLANATION
Pradyum outlined privacy safeguards: raw video footage is never released, personal identifiers such as faces and license plates are blurred, and all data is stored on bare‑metal servers in Europe, avoiding public cloud providers.
EVIDENCE
He explained that they never expose raw video, blur faces and number plates, and store data on bare-metal European servers (Hetzner), without using hyperscalers like AWS [208-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The privacy measures-blurring personal identifiers and storing raw video on bare-metal European servers-are consistent with data-governance practices discussed in recent policy literature on data protection and sovereignty [S23][S25].
MAJOR DISCUSSION POINT
Data privacy and compliance measures
DISAGREED WITH
Vivek Gupta
Argument 4
Concern about DPDP compliance and personal data handling in visual mapping
EXPLANATION
An audience member asked how Papri Labs complies with India’s DPDP regulations given the personal data they collect. Pradyum responded by detailing their anonymisation practices and secure storage architecture.
EVIDENCE
The audience raised DPDP compliance concerns regarding personal images and number plates, and Pradyum answered that they blur faces and plates, keep raw video internal, and host data on bare-metal European servers, ensuring compliance [208-214] (reiterated in [225-231]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Their compliance approach aligns with privacy regulation considerations outlined in AI privacy studies and data-governance frameworks, which stress anonymisation and secure storage for visual data [S25][S23].
MAJOR DISCUSSION POINT
DPDP compliance strategy
Argument 5
Difficulty in obtaining crowdsourced visual data because individuals are reluctant to install dash‑cams or share phone data
EXPLANATION
Pradyum noted that despite the need for large volumes of visual data, convincing individual passengers or vehicle owners to place dash‑cams or share phone data is a major obstacle, limiting the scalability of their data collection approach.
EVIDENCE
He remarked that asking a single passenger to put a phone in their car to provide data meets resistance, with none willing to comply, illustrating the challenge of crowdsourced data acquisition [162-164].
MAJOR DISCUSSION POINT
Challenges in crowdsourced data acquisition
M
Meenal Gupta
3 arguments154 words per minute1230 words478 seconds
Argument 1
AI‑assisted contouring reduces manual radiotherapy planning from 90‑960 min to 5‑15 min; HIPAA, ISO, and SEDESCO certifications ensure regulatory compliance
EXPLANATION
Meenal described how EasyOPI’s Imagix AI automates the contouring step in radiotherapy planning, cutting processing time dramatically. The solution is certified for medical use, meeting HIPAA, ISO 13485, and SEDESCO standards.
EVIDENCE
She explained that manual contouring takes 90-960 minutes, while their AI reduces it to 5-15 minutes, and highlighted certifications such as HIPAA, ISO 13485, and SEDESCO [332-335] and [290-294].
MAJOR DISCUSSION POINT
AI acceleration of radiotherapy planning
Argument 2
Trust built through human‑in‑the‑loop validation; AI provides assistance, not autonomous decisions
EXPLANATION
Meenal emphasized that their AI system assists clinicians but final decisions remain with radiologists, ensuring a human‑in‑the‑loop approach that builds trust in the technology.
EVIDENCE
She stated that the AI assists doctors, but final approval must be given by radiologists, maintaining a human-in-the-loop process [348-352].
MAJOR DISCUSSION POINT
Human‑in‑the‑loop trust model
Argument 3
Large‑scale deployment has yielded over one million scans and identified thousands of TB and lung‑cancer cases, demonstrating tangible health impact
EXPLANATION
Meenal highlighted the extensive reach of Imagix AI across multiple Indian states, processing more than a million chest X‑ray scans and flagging thousands of TB‑positive and lung‑cancer cases, thereby showing concrete public‑health benefits.
EVIDENCE
She reported that the platform has processed around one million scans, detected approximately 4,000 TB-positive cases and 2,700 TB-flagged cases, and performed hundreds of thousands of chest X-rays, underscoring its real-world impact [334-340].
MAJOR DISCUSSION POINT
Real‑world health impact of AI‑driven imaging
V
Vivek Gupta
4 arguments193 words per minute1679 words519 seconds
Argument 1
DIY, no‑code voice platform covering STT, TTS, LLM, speech‑to‑speech with sub‑400 ms latency; native dialect mastery
EXPLANATION
Vivek presented Indus Labs AI’s platform as a no‑code, DIY solution that provides end‑to‑end voice capabilities—including speech‑to‑text, text‑to‑speech, and large‑language‑model integration—with low latency and support for diverse Indian dialects.
EVIDENCE
He described the platform as a DIY, no-code solution that includes STT, TTS, LLM, speech-to-speech, achieves sub-400 ms latency, and handles native Indian dialects across regions [362-368] (also noted in [363]).
MAJOR DISCUSSION POINT
DIY multilingual voice platform
DISAGREED WITH
Pradyum Gupta, Vaibhavath Shukla
Argument 2
End‑to‑end integration with CRM, cost reduction up to 70 %, data residency on Indian servers
EXPLANATION
Vivek explained that the platform integrates directly with CRM systems, provides sentiment analysis, reduces operational costs by up to 70 %, and ensures that all data remains within India for sovereignty.
EVIDENCE
He detailed CRM integration, sentiment analysis, cost reductions of up to 70 %, and Indian data residency, noting latency and multilingual support [374-382].
MAJOR DISCUSSION POINT
CRM integration and cost efficiency
Argument 3
Request for details on how to construct voice‑agent flows on the platform
EXPLANATION
An audience member asked whether the platform provides pre‑built flows or requires users to manually connect nodes. Vivek clarified that the platform is DIY, offering tutorials and support for building custom flows.
EVIDENCE
The audience queried the flow-building process, and Vivek responded that the platform is DIY with tutorials, and that support is available if users encounter difficulties [441-445].
MAJOR DISCUSSION POINT
DIY flow construction guidance
Argument 4
Commitment to Indian data sovereignty by keeping all platform data on Indian servers
EXPLANATION
Vivek emphasized that the entire data pipeline resides within India, ensuring compliance with sovereign data requirements and enhancing trust for Indian enterprises and regulators.
EVIDENCE
He stated that the data will reside in India, describing the platform as a pure Indian company with data residency on Indian soil, reinforcing the focus on data sovereignty [390-393].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek’s emphasis on Indian-only data residency reflects broader policy discussions on data sovereignty and national regulatory compliance in AI deployments [S25][S23].
MAJOR DISCUSSION POINT
Data sovereignty and compliance
DISAGREED WITH
Pradyum Gupta
A
Audience
5 arguments161 words per minute492 words183 seconds
Argument 1
Concern about DPDP compliance for large‑scale visual data collection
EXPLANATION
The audience raised questions about how Papri Labs complies with India’s Data Protection and Data Privacy (DPDP) regulations given the massive amount of personal visual data they collect, such as images of people and vehicle number plates.
EVIDENCE
Audience members asked how the company handles DPDP compliance and personal data after hearing that they process petabytes of video and capture images of individuals and car numbers, prompting a clarification on anonymisation and storage practices [199-206].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The audience’s DPDP concerns echo the same data-protection guidelines and best-practice recommendations on handling personal visual data in large-scale projects [S25][S23].
MAJOR DISCUSSION POINT
Data privacy and regulatory compliance
Argument 2
Question about incentive mechanisms for dash‑cam data contributors
EXPLANATION
The audience inquired what incentives are offered to owners of dash‑cams or other devices that supply visual data, highlighting the need for a sustainable model to encourage data contribution.
EVIDENCE
An audience member asked about incentives for dash-cam holders, and Pradyum responded that they do not pay incentives; instead, the data providers pay the company for the service [244-247] and [246].
MAJOR DISCUSSION POINT
Incentive structures for data contributors
Argument 3
Skepticism about scaling foundational AI models after demo‑level success
EXPLANATION
Audience members expressed doubts that foundational models, which work well in demos, can be scaled reliably, citing the failure of the Servam model at larger scale and asking how the startup plans to address such scalability issues.
EVIDENCE
The audience referenced seeing Servam break when scaling and asked how the company would combat similar scenarios, leading to a discussion about their proprietary data engine and concurrency scaling plans [103-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The skepticism mirrors expert commentary on the challenges of scaling foundation models from prototype to production, as discussed in analyses of AI model scalability and context-specific deployments [S19].
MAJOR DISCUSSION POINT
Scalability challenges of AI models
Argument 4
Inquiry about deploying AI agents directly on end‑user phones
EXPLANATION
The audience asked whether an AI agent could be installed on a phone to answer incoming calls according to specific business requirements, probing the feasibility of on‑device AI solutions.
EVIDENCE
Audience members asked if an agent could be placed into a phone to attend calls and answer according to requirements, and Vaibhavath confirmed it is possible via a backend handshake using web sockets [94-95].
MAJOR DISCUSSION POINT
On‑device AI agent deployment
Argument 5
Emphasis on trust as a critical factor for AI adoption in health technology
EXPLANATION
The audience highlighted that trust in the underlying technology and scientific methodology is essential for the acceptance of AI‑driven health solutions across India.
EVIDENCE
An audience participant asked how the company ensures that the technology and science behind its health AI product are trusted by beneficiaries [346-347].
MAJOR DISCUSSION POINT
Building trust in health AI solutions
Agreements
Agreement Points
All presenters adhered to the moderator’s rule to keep pitches strictly product‑focused, avoiding business, funding or revenue discussion.
Speakers: Archana Jahargirdar, Ravindra Kumar, Vaibhavath Shukla, Pradyum Gupta, Meenal Gupta, Vivek Gupta
Emphasis on product-only pitches, no business or funding talk (Archana) [5-8] Presentation of Technodate AI focused on product modules (Ravindra) [28-32] Presentation of Quonsys AI focused on voice infrastructure product (Vaibhavath) [68-76] Presentation of Papri Labs focused on visual-mapping product (Pradyum) [121-144] Presentation of EasyOPI Solutions focused on Imagix AI product (Meenal) [288-335] Presentation of Indus Labs AI focused on voice platform product (Vivek) [362-368]
Archana set a clear guideline that founders should talk only about their product and not about business or funding, and every founder respected this instruction, keeping their remarks centred on technical capabilities and use‑cases rather than commercial details.
Consensus that delivering domain‑specific applications and building proprietary data assets is more critical than investing in large foundational models.
Speakers: Ravindra Kumar, Vaibhavath Shukla, Pradyum Gupta, Vivek Gupta
Foundational models are optional; focus should be on solving specific customer problems at the application layer (Ravindra) [55-60] Creation of a proprietary data engine to generate training data; avoiding public datasets (Vaibhavath) [109-115] Collecting massive visual data via own dash-cams and processing it internally rather than relying on generic models (Pradyum) [162-165] Building a DIY platform with own infrastructure rather than depending on external foundational models (Vivek) [362-368]
All four speakers argued that the priority is to build specialised, application‑level solutions supported by in‑house data pipelines, and that the expense and difficulty of creating a generic foundational model can be bypassed.
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with expert commentary urging focus on small, domain-specific niche models rather than large foundation models, emphasizing democratized AI development [S55].
Strong emphasis on data privacy, regulatory compliance and sovereignty across different domains.
Speakers: Pradyum Gupta, Meenal Gupta, Vivek Gupta, Vaibhavath Shukla
Blurring faces and number plates; storing raw video on bare-metal European servers; DPDP compliance (Pradyum) [208-214] HIPAA, ISO 13485 and SEDESCO certifications ensure medical data compliance (Meenal) [290-294] Indian data residency; all data kept on Indian servers for sovereignty (Vivek) [390-393] Proprietary data engine to control data quality and avoid reliance on public datasets (Vaibhavath) [109-115]
Each speaker highlighted concrete measures to protect personal data and meet national or sectoral regulations, signalling a shared commitment to privacy and data‑sovereignty.
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis mirrors multilateral data-governance frameworks that stress data sovereignty, compliance with national laws, and robust protection policies [S43][S44][S45].
Cost reduction is presented as a primary value proposition for AI‑enabled automation.
Speakers: Vaibhavath Shukla, Vivek Gupta, Ravindra Kumar
Demonstrated cost savings of 70-90 % for large BPO/enterprise customers (Vaibhavath) [119-122] End-to-end integration can reduce operational costs up to 70 % (Vivek) [380-382] Automation of industrial processes reduces labour and improves efficiency (Ravindra) [28-32]
All three speakers framed their solutions as ways to achieve substantial cost efficiencies for enterprises, whether in call‑center operations, voice platforms or industrial automation.
Promotion of DIY / no‑code platforms that enable non‑technical users to build AI‑driven solutions.
Speakers: Ravindra Kumar, Vaibhavath Shukla, Vivek Gupta
Make automation as easy as DIY using agentic AI (Ravindra) [28-32] Voice infrastructure is a DIY platform for building voice agents (Vaibhavath) [68-76] DIY, no-code voice platform with low latency and dialect mastery (Vivek) [362-368]
Each of these founders positioned their product as a self‑service, low‑code environment that lowers the barrier for organisations to create AI solutions without deep technical expertise.
Similar Viewpoints
Both argue that the core competitive advantage lies in owning the data pipeline and tailoring models to the specific problem domain, reducing dependence on large, generic foundation models.
Speakers: Ravindra Kumar, Vaibhavath Shukla
Need for proprietary data / domain-specific models rather than generic foundational models (Ravindra) [55-60] Proprietary data engine to generate high-quality training data (Vaibhavath) [109-115]
Both stress the importance of controlling data collection and processing to ensure quality, security and regulatory compliance.
Speakers: Pradyum Gupta, Vaibhavath Shukla
Privacy safeguards (blurring, internal storage) for large visual datasets (Pradyum) [208-214] Proprietary data engine to avoid unreliable public datasets (Vaibhavath) [109-115]
Both see trust‑building measures—whether through human oversight or data sovereignty—as essential for adoption of AI solutions in sensitive sectors.
Speakers: Meenal Gupta, Vivek Gupta
Human-in-the-loop validation to build trust in health AI (Meenal) [348-352] Data residency and sovereign hosting to foster trust in voice AI (Vivek) [390-393]
Unexpected Consensus
Both a health‑imaging startup (EasyOPI) and a voice‑AI startup (Indus Labs) highlighted the need for human‑in‑the‑loop or sovereign data handling as a trust mechanism, despite operating in very different domains.
Speakers: Meenal Gupta, Vivek Gupta
Human-in-the-loop validation ensures trust in radiotherapy planning (Meenal) [348-352] Indian data residency guarantees sovereign control and trust (Vivek) [390-393]
While one focuses on clinical validation and the other on national data residency, both converge on the principle that trust is achieved by keeping a human or jurisdictional safeguard over AI decisions.
POLICY CONTEXT (KNOWLEDGE BASE)
Their trust-by-design approach reflects governance recommendations for human oversight and sovereign data stewardship to build user confidence [S43][S44].
Agreement between a visual‑mapping company (Papri Labs) and a voice‑AI company (Quonsys AI) on the necessity of building a proprietary data engine to overcome limitations of public datasets, even though their products serve unrelated markets.
Speakers: Pradyum Gupta, Vaibhavath Shukla
Difficulty of crowdsourced visual data and reliance on own data collection (Pradyum) [162-165] Creation of a proprietary data engine after public datasets proved unreliable (Vaibhavath) [109-115]
Both founders independently arrived at the conclusion that owning the data generation pipeline is essential for performance, showing a cross‑domain convergence on data strategy.
POLICY CONTEXT (KNOWLEDGE BASE)
Papri Labs demonstrated a proprietary visual data engine to address public-dataset gaps, illustrating the broader industry trend toward private data assets for domain solutions [S38][S55].
Overall Assessment

The discussion revealed a clear convergence around product‑centric, application‑layer AI solutions that prioritize proprietary data, privacy compliance, cost efficiency and user‑friendly DIY interfaces. Speakers from diverse sectors (industrial automation, voice call‑center automation, visual mapping, health imaging) repeatedly stressed the same strategic pillars: avoid heavyweight foundational models, protect data, demonstrate tangible cost savings, and empower users through low‑code platforms.

High consensus on strategic approach (application focus, data ownership, privacy, cost reduction, DIY enablement). This alignment suggests that future AI deployments in the Indian context are likely to follow a model of domain‑specific, privacy‑by‑design products that are accessible to non‑technical users and deliver clear economic benefits.

Differences
Different Viewpoints
Whether a foundational AI model is required for the product
Speakers: Ravindra Kumar, Vaibhavath Shukla
Need for a foundational model despite funding challenges; iterative customer‑driven approach Building a proprietary data engine is essential for domain‑specific performance; foundational model development is secondary
Ravindra states that after experimenting with customers they realized they still need a foundational model for their automation platform [33-40], while Vaibhavath argues that a custom data engine is sufficient and a foundational model is not essential for delivering performance [109-115].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate echoes ongoing discussions in AI policy circles about preferring niche, domain-specific models over large, generic foundation models [S55].
Approach to data residency and storage for large‑scale visual data
Speakers: Pradyum Gupta, Vivek Gupta
Blurring faces and number plates; keeping raw video internal; using bare‑metal European servers, no hyperscalers Commitment to Indian data sovereignty by keeping all platform data on Indian servers
Pradyum explains that all raw video is stored on bare-metal servers in Europe and never exposed publicly, emphasizing privacy and avoiding hyperscalers [208-214]. Vivek, in contrast, stresses that all data for his voice platform resides within India to satisfy data-sovereignty requirements [390-393].
POLICY CONTEXT (KNOWLEDGE BASE)
Data-residency considerations are guided by principles that data remains subject to the laws of its country of origin and must respect sovereignty, as highlighted in recent data-governance policy statements [S43][S41].
Pricing and monetisation models for AI‑driven services
Speakers: Pradyum Gupta, Vaibhavath Shukla, Vivek Gupta
Business model based on selling “tiles” of mapped area on a per‑day basis Pricing model based on per‑minute usage; scalability plan with custom data engine and concurrency growth DIY, no‑code voice platform covering STT, TTS, LLM, speech‑to‑speech with sub‑400 ms latency; native dialect mastery
Pradyum proposes a tile-based fee of 1.5 lakh rupees per 25 km × 25 km tile per day [267-271]. Vaibhavath charges customers per minute of AI usage and plans to increase concurrency as demand grows [99-107]. Vivek also uses a per-minute pricing model but at a lower rate (≈2 rupees per minute) and highlights cost reductions up to 70 % [380-382]. The three founders therefore disagree on the optimal monetisation strategy.
POLICY CONTEXT (KNOWLEDGE BASE)
Challenges of AI service monetisation for low-resource users and calls for new, sustainable models have been documented in agritech AI deployments and AI-storytelling summit recommendations [S40][S42].
Adherence to the summit’s “product‑only” presentation rule
Speakers: Archana Jahargirdar, Ravindra Kumar, Pradyum Gupta, Vaibhavath Shukla
Emphasis on product‑only pitches, no business or funding talk Ravindra acknowledges that “being a founder, some pitching comes in by default” Business model based on selling tiles; discussion of revenue and partnerships Building a complete voice infrastructure … includes partnership with OpenAI and pricing details
Archana explicitly instructs presenters to avoid any business, funding or revenue discussion and focus solely on the product [5-8]. Ravindra, however, admits that pitching elements slipped into his talk [43]. Pradyum and Vaibhavath both describe commercial aspects such as pricing, partnerships and revenue models during their presentations [267-271] and [74-76], respectively, creating tension with the moderator’s rule.
Unexpected Differences
Founders’ inclusion of commercial details despite a moderator‑enforced product‑only format
Speakers: Archana Jahargirdar, Ravindra Kumar, Pradyum Gupta, Vaibhavath Shukla
Emphasis on product‑only pitches, no business or funding talk Ravindra acknowledges that “being a founder, some pitching comes in by default” Business model based on selling tiles; discussion of revenue and partnerships Building a complete voice infrastructure … includes partnership with OpenAI and pricing details
The moderator’s clear instruction to keep presentations strictly technical [5-8] was unexpectedly breached by multiple founders who introduced business-related content (pricing, partnerships, revenue models). This tension was not anticipated given the session’s stated purpose.
Overall Assessment

The discussion revealed several substantive disagreements: (1) the necessity of a foundational AI model versus reliance on bespoke data engines; (2) contrasting data‑sovereignty strategies (European bare‑metal vs Indian‑hosted servers); (3) divergent monetisation approaches (tile‑based, per‑minute, or low‑cost per‑minute pricing); and (4) tension between the moderator’s product‑only rule and founders’ inclination to discuss commercial aspects. While all participants agree on the broader aim of AI‑driven automation, they differ markedly on technical architecture, data governance, and business models.

Moderate to high – the disagreements span technical design choices, regulatory compliance strategies, and presentation norms, indicating that consensus on implementation pathways is limited. These divergences could affect collaboration, standard‑setting, and policy formulation within the AI‑driven automation ecosystem.

Partial Agreements
All speakers share the overarching goal of leveraging AI to automate complex, domain‑specific processes (industrial automation, call‑center operations, real‑time visual mapping, voice infrastructure, radiotherapy planning). However, they diverge on the technical route to achieve this—Ravindra emphasises a foundational model, Vaibhavath relies on a custom data engine, Pradyum uses crowdsourced visual data, Vivek builds a DIY multilingual voice stack, and Meenal focuses on AI‑assisted medical imaging with strict regulatory compliance.
Speakers: Ravindra Kumar, Vaibhavath Shukla, Pradyum Gupta, Vivek Gupta, Meenal Gupta
Goal to automate automation / voice / mapping / radiotherapy planning using AI Building a proprietary data engine is essential for domain‑specific performance; foundational model development is secondary Collecting massive visual data via dash‑cams/CCTVs to update maps instantly; use cases in billboard pricing, autonomous vehicle safety, bus fleet optimisation DIY, no‑code voice platform covering STT, TTS, LLM, speech‑to‑speech with sub‑400 ms latency; native dialect mastery AI‑assisted contouring reduces manual radiotherapy planning from 90‑960 min to 5‑15 min; HIPAA, ISO, SEDESCO certifications ensure regulatory compliance
Takeaways
Key takeaways
The summit adopted a strict product‑only presentation format: founders were asked to discuss only their technology, avoiding business, funding, or sales pitches. Presenters were encouraged to balance technical jargon with accessibility so non‑AI audiences could understand. Technodate AI (Ravindra Kumar) aims to ‘automate automation’ using agentic AI with three modules – conceptualization, deployment, and troubleshooting – and highlighted the need for a domain‑specific foundational model despite funding constraints. Quonsys AI (Vaibhavath Shukla) is building an end‑to‑end voice AI platform to fully automate call‑center operations, leveraging a proprietary data engine, per‑minute pricing, and partnerships with OpenAI and large enterprises. Papri Labs (Pradyum Gupta) offers a real‑time visual mapping and video‑analytics platform that aggregates dash‑cam/CCTV footage to keep maps current and provides B2B services such as dynamic billboard pricing and fleet optimization, sold on a per‑tile, per‑day basis. Papri Labs addressed data‑privacy concerns by blurring personally identifiable information, keeping raw video internal, and hosting on bare‑metal European servers rather than hyperscalers. EasyOPI Solutions (Meenal Gupta) delivers AI‑assisted cancer imaging and radiotherapy treatment planning, reducing contouring time from up to 960 minutes to 5‑15 minutes, and emphasizes regulatory compliance (HIPAA, ISO 13485, SEDESCO) and a human‑in‑the‑loop model for trust. Indus Labs AI (Vivek Gupta) provides a DIY, no‑code voice‑architecture platform covering STT, TTS, LLM and speech‑to‑speech with sub‑400 ms latency, native Indian dialect support, end‑to‑end CRM integration, and a cost model up to 70 % cheaper than global alternatives. Founders across the board stressed that solving concrete customer problems at the application layer is more critical than building generic foundational models.
Resolutions and action items
Founders were invited to continue one‑on‑one conversations after the session (implicit action to follow up with interested parties). Papri Labs clarified its pricing model (tile‑based, per‑day) in response to audience queries. Quonsys AI explained its per‑minute usage pricing and plans to increase concurrency as the model scales.
Unresolved issues
How Quonsys AI will reliably scale its foundational model to handle higher concurrency without the failures observed in other systems (e.g., Servam) – no concrete scaling plan was detailed. Detailed DPDP compliance mechanisms for Papri Labs beyond blurring faces and number plates and using European bare‑metal servers remain unclear. Incentive mechanisms for dash‑cam owners or bus operators supplying visual data were not fully explained; the claim that they pay the platform leaves the motivation question open. Specific steps for non‑technical users to construct voice‑agent flows on Indus Labs’ platform were only broadly described; a concrete UI/UX walkthrough was not provided. Long‑term sustainability of Technodate AI’s foundational model development given funding challenges was not resolved.
Suggested compromises
Archana asked presenters to use jargon if needed but also to simplify for non‑AI audiences – a compromise between technical depth and accessibility. Ravindra Kumar offered presenters the choice to simplify or retain technical language, respecting both preferences. Papri Labs chose to keep raw video data internal and only sell processed, anonymized outputs, balancing data utility with privacy compliance. Quonsys AI adopted a per‑minute pricing model rather than a fixed subscription, allowing customers to pay only for actual usage.
Thought Provoking Comments
The format we’ll follow is that each one of you takes a little bit of time to talk about your product. But like I said again, only product. No business, no pitching, no money, nothing.
Sets a clear, disciplined scope for the session, emphasizing knowledge sharing over fundraising, which frames the entire discussion and encourages technical depth.
Established the tone of the meeting, prompting founders to focus on product details. This led to more technical explanations (e.g., foundational models, data engines) rather than sales pitches, shaping the subsequent flow of the conversation.
Speaker: Archana Jahargirdar
Model can become ASI, the super intelligence level. You still will have to build the application.
Highlights the distinction between raw AI capability and practical productisation, reminding the audience that even the most advanced models need domain‑specific application layers to deliver value.
Shifted the discussion from abstract AI hype to concrete engineering challenges. It prompted follow‑up questions about foundational models, data ownership, and on‑premise deployment, deepening the technical debate.
Speaker: Ravindra Kumar
India doesn’t need more wrappers we need infrastructure and that’s what we are building at Quonsys AI… we can automate the entire call‑center and run it end‑to‑end without humans in the loop.
Frames the problem as a missing foundational layer rather than incremental features, positioning voice AI as essential national infrastructure.
Redirected the audience’s focus to large‑scale, systemic challenges (data generation, latency, cost). Sparked a series of questions about deployment, scaling, and pricing models, moving the conversation toward practical implementation.
Speaker: Vaibhavath Shukla
We built our own data engine because public datasets were insufficient; we generate data at scale and fine‑tune on it.
Identifies a core bottleneck—data scarcity—and presents a self‑sufficient solution, illustrating a strategic approach to AI development in a resource‑constrained environment.
Introduced the theme of data sovereignty and scalability, leading to deeper inquiries about model performance, concurrency limits, and cost structures.
Speaker: Vaibhavath Shukla
We never take out front‑camera video for public use; faces and number plates are blurred. We run on bare‑metal servers in Europe, not on hyperscalers.
Addresses privacy and regulatory compliance (DPDP) head‑on, showing a concrete governance framework for handling massive visual data.
Turned the discussion toward legal and ethical considerations, prompting the audience to probe further on incentives for data contributors and the business model, thereby expanding the scope beyond pure technology.
Speaker: Pradyum Gupta
We are not replacing doctors. We are just assisting them; final approval has to be done by radiologists. It’s a human‑in‑the‑loop system.
Acknowledges trust issues in health‑tech AI and offers a pragmatic mitigation strategy, reinforcing credibility and ethical responsibility.
Reassured the audience about safety and trust, leading to a concise Q&A about adoption barriers and reinforcing the product’s positioning as a supportive tool rather than a black‑box replacement.
Speaker: Meenal Gupta
We are building the voice operating system of India – low latency (sub‑500 ms), Indian dialect mastery, emotional handling, and sovereign data residency.
Combines technical performance metrics with cultural relevance and data sovereignty, presenting a comprehensive value proposition that differentiates from global players.
Created a pivot point where the conversation moved to comparative analysis with global solutions, cost advantages, and the importance of localized AI, prompting questions about no‑code flow and integration.
Speaker: Vivek Gupta
Overall Assessment

The discussion was shaped by a handful of strategic comments that repeatedly redirected the conversation from generic product pitches to deeper, systemic issues—such as the necessity of application layers over raw AI models, data sovereignty, regulatory compliance, and trust in high‑stakes domains like health. Archana’s opening rule set the disciplined, product‑centric tone, while each founder’s standout remark introduced a new dimension (foundational models, infrastructure gaps, data engine creation, privacy safeguards, human‑in‑the‑loop design, and localized voice OS). These insights triggered focused Q&A rounds, broadened the scope to include legal, ethical, and scalability concerns, and ultimately elevated the dialogue from superficial descriptions to a nuanced exploration of how AI products can be responsibly and effectively deployed in India.

Follow-up Questions
Can the AI agent be deployed directly onto a phone number to answer inbound calls according to specific requirements?
Clarifies technical feasibility of integrating the AI call‑center solution with existing telephony infrastructure, crucial for practical adoption.
Speaker: Audience (unidentified participant)
What is the pricing and subscription model for the AI call‑center solution (per‑minute vs subscription)?
Understanding the cost structure is essential for scaling the product and for potential customers to evaluate ROI.
Speaker: Audience (unidentified participant)
How will the voice AI platform scale reliably, especially given observed failures like Servam when scaling foundational models?
Addresses concerns about robustness and performance of large‑scale AI models, a key factor for enterprise deployment.
Speaker: Audience (unidentified participant)
How does Papri Labs ensure compliance with DPDP (data privacy) when handling personal data such as faces and vehicle number plates in its mapping solution?
Legal compliance and privacy protection are critical for operating in regulated markets and maintaining user trust.
Speaker: Audience (unidentified participant)
What incentives are offered to dash‑cam or vehicle owners to contribute data for the mapping platform?
Sustainable data collection depends on effective incentive mechanisms; understanding this helps assess scalability of data acquisition.
Speaker: Audience (unidentified participant)
How does EasyOPI ensure trust and validation of its AI‑driven cancer treatment planning among clinicians and patients?
Trust is a major barrier in health‑tech adoption; mechanisms for validation and clinician oversight are vital for acceptance.
Speaker: Audience (unidentified participant)
Is the voice‑AI platform a fully no‑code solution where users can simply click to start an agent, or must they manually connect nodes and build flows?
Usability determines adoption speed for non‑technical users; clarity on the level of required configuration is needed.
Speaker: Audience (unidentified participant)
What motivated the founders to leave their previous jobs and start their AI ventures, and what challenges did they face early on?
Founder stories provide insight into entrepreneurial pathways and potential hurdles for future founders.
Speaker: Audience (unidentified participant)
Is building a proprietary foundational model necessary for industrial automation, or can existing models suffice?
Determines the strategic direction and resource allocation for developing AI solutions in manufacturing.
Speaker: Ravindra Kumar
Can agentic AI be effectively used for CNC programming and automated error diagnosis in aerospace/defense equipment?
Explores a high‑impact application area where AI could streamline complex engineering processes.
Speaker: Ravindra Kumar
What methods does Quonsys AI use to generate large‑scale synthetic training data via its data engine, and how does this affect model performance?
Understanding data generation pipelines is key for replicating success and improving model robustness.
Speaker: Vaibhavath Shukla
What challenges arise when deploying AI‑driven medical imaging solutions in low‑connectivity regions, and how can on‑premise deployments address them?
Highlights infrastructure constraints in remote areas, informing strategies for broader healthcare AI rollout.
Speaker: Meenal Gupta
What are the advantages and trade‑offs of using bare‑metal servers versus hyperscalers for security, cost, and compliance in AI deployments?
Infrastructure choices impact data sovereignty, latency, and operational expenses, influencing deployment decisions.
Speaker: Pradyum Gupta
How does Indus Labs achieve low latency and accurate multi‑dialect support for Indian languages in its voice AI platform?
Technical solutions for dialect diversity and latency are critical for user experience in a linguistically varied market.
Speaker: Vivek Gupta
How does emotional detection (affect recognition) in voice AI improve customer interactions, and what metrics are used to evaluate its effectiveness?
Affective computing can enhance satisfaction; measuring its impact guides product refinement.
Speaker: Vivek Gupta
What is the pricing strategy for Papri Labs’ geospatial data (tile‑based pricing), and how does it scale with larger geographic coverage?
Understanding pricing models for spatial data informs business sustainability and market penetration.
Speaker: Pradyum Gupta

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.